00:00:00.000 Started by upstream project "autotest-spdk-v24.01-LTS-vs-dpdk-v22.11" build number 982 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "nightly-trigger" build number 3649 00:00:00.001 originally caused by: 00:00:00.001 Started by timer 00:00:00.113 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.114 The recommended git tool is: git 00:00:00.114 using credential 00000000-0000-0000-0000-000000000002 00:00:00.116 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.178 Fetching changes from the remote Git repository 00:00:00.181 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.219 Using shallow fetch with depth 1 00:00:00.219 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.219 > git --version # timeout=10 00:00:00.248 > git --version # 'git version 2.39.2' 00:00:00.248 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.263 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.263 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.656 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.667 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.679 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.679 > git config core.sparsecheckout # timeout=10 00:00:06.691 > git read-tree -mu HEAD # timeout=10 00:00:06.706 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.728 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.728 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:06.824 [Pipeline] Start of Pipeline 00:00:06.838 [Pipeline] library 00:00:06.839 Loading library shm_lib@master 00:00:06.839 Library shm_lib@master is cached. Copying from home. 00:00:06.852 [Pipeline] node 00:00:06.862 Running on WFP21 in /var/jenkins/workspace/nvmf-phy-autotest 00:00:06.864 [Pipeline] { 00:00:06.872 [Pipeline] catchError 00:00:06.873 [Pipeline] { 00:00:06.882 [Pipeline] wrap 00:00:06.889 [Pipeline] { 00:00:06.894 [Pipeline] stage 00:00:06.895 [Pipeline] { (Prologue) 00:00:07.078 [Pipeline] sh 00:00:07.364 + logger -p user.info -t JENKINS-CI 00:00:07.382 [Pipeline] echo 00:00:07.384 Node: WFP21 00:00:07.391 [Pipeline] sh 00:00:07.692 [Pipeline] setCustomBuildProperty 00:00:07.705 [Pipeline] echo 00:00:07.707 Cleanup processes 00:00:07.712 [Pipeline] sh 00:00:07.999 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:07.999 1068204 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.013 [Pipeline] sh 00:00:08.299 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:00:08.299 ++ grep -v 'sudo pgrep' 00:00:08.299 ++ awk '{print $1}' 00:00:08.299 + sudo kill -9 00:00:08.299 + true 00:00:08.313 [Pipeline] cleanWs 00:00:08.322 [WS-CLEANUP] Deleting project workspace... 00:00:08.322 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.327 [WS-CLEANUP] done 00:00:08.331 [Pipeline] setCustomBuildProperty 00:00:08.344 [Pipeline] sh 00:00:08.628 + sudo git config --global --replace-all safe.directory '*' 00:00:08.710 [Pipeline] httpRequest 00:00:09.029 [Pipeline] echo 00:00:09.031 Sorcerer 10.211.164.20 is alive 00:00:09.038 [Pipeline] retry 00:00:09.040 [Pipeline] { 00:00:09.051 [Pipeline] httpRequest 00:00:09.055 HttpMethod: GET 00:00:09.055 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.056 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.075 Response Code: HTTP/1.1 200 OK 00:00:09.075 Success: Status code 200 is in the accepted range: 200,404 00:00:09.076 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.425 [Pipeline] } 00:00:10.442 [Pipeline] // retry 00:00:10.449 [Pipeline] sh 00:00:10.735 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:10.755 [Pipeline] httpRequest 00:00:11.111 [Pipeline] echo 00:00:11.113 Sorcerer 10.211.164.20 is alive 00:00:11.122 [Pipeline] retry 00:00:11.124 [Pipeline] { 00:00:11.140 [Pipeline] httpRequest 00:00:11.144 HttpMethod: GET 00:00:11.145 URL: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:11.146 Sending request to url: http://10.211.164.20/packages/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:00:11.167 Response Code: HTTP/1.1 200 OK 00:00:11.167 Success: Status code 200 is in the accepted range: 200,404 00:00:11.168 Saving response body to /var/jenkins/workspace/nvmf-phy-autotest/spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:30.711 [Pipeline] } 00:01:30.731 [Pipeline] // retry 00:01:30.739 [Pipeline] sh 00:01:31.030 + tar --no-same-owner -xf spdk_c13c99a5eba3bff912124706e0ae1d70defef44d.tar.gz 00:01:33.584 [Pipeline] sh 00:01:33.871 + git -C spdk log --oneline -n5 00:01:33.871 c13c99a5e test: Various fixes for Fedora40 00:01:33.871 726a04d70 test/nvmf: adjust timeout for bigger nvmes 00:01:33.871 61c96acfb dpdk: Point dpdk submodule at a latest fix from spdk-23.11 00:01:33.871 7db6dcdb8 nvme/fio_plugin: update the way ruhs descriptors are fetched 00:01:33.871 ff6f5c41e nvme/fio_plugin: trim add support for multiple ranges 00:01:33.889 [Pipeline] withCredentials 00:01:33.900 > git --version # timeout=10 00:01:33.912 > git --version # 'git version 2.39.2' 00:01:33.930 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS 00:01:33.932 [Pipeline] { 00:01:33.941 [Pipeline] retry 00:01:33.943 [Pipeline] { 00:01:33.958 [Pipeline] sh 00:01:34.243 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4 00:01:34.255 [Pipeline] } 00:01:34.272 [Pipeline] // retry 00:01:34.277 [Pipeline] } 00:01:34.293 [Pipeline] // withCredentials 00:01:34.303 [Pipeline] httpRequest 00:01:34.774 [Pipeline] echo 00:01:34.776 Sorcerer 10.211.164.20 is alive 00:01:34.785 [Pipeline] retry 00:01:34.787 [Pipeline] { 00:01:34.801 [Pipeline] httpRequest 00:01:34.806 HttpMethod: GET 00:01:34.806 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:34.807 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:34.810 Response Code: HTTP/1.1 200 OK 00:01:34.811 Success: Status code 200 is in the accepted range: 200,404 00:01:34.811 Saving response body to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:41.853 [Pipeline] } 00:01:41.872 [Pipeline] // retry 00:01:41.881 [Pipeline] sh 00:01:42.171 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz 00:01:43.563 [Pipeline] sh 00:01:43.849 + git -C dpdk log --oneline -n5 00:01:43.849 caf0f5d395 version: 22.11.4 00:01:43.849 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:43.849 dc9c799c7d vhost: fix missing spinlock unlock 00:01:43.849 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:43.849 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:43.859 [Pipeline] } 00:01:43.876 [Pipeline] // stage 00:01:43.887 [Pipeline] stage 00:01:43.889 [Pipeline] { (Prepare) 00:01:43.910 [Pipeline] writeFile 00:01:43.927 [Pipeline] sh 00:01:44.214 + logger -p user.info -t JENKINS-CI 00:01:44.226 [Pipeline] sh 00:01:44.511 + logger -p user.info -t JENKINS-CI 00:01:44.530 [Pipeline] sh 00:01:44.862 + cat autorun-spdk.conf 00:01:44.862 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:44.862 SPDK_TEST_NVMF=1 00:01:44.862 SPDK_TEST_NVME_CLI=1 00:01:44.862 SPDK_TEST_NVMF_NICS=mlx5 00:01:44.862 SPDK_RUN_UBSAN=1 00:01:44.862 NET_TYPE=phy 00:01:44.862 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:44.862 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:44.870 RUN_NIGHTLY=1 00:01:44.874 [Pipeline] readFile 00:01:44.891 [Pipeline] withEnv 00:01:44.892 [Pipeline] { 00:01:44.904 [Pipeline] sh 00:01:45.190 + set -ex 00:01:45.190 + [[ -f /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf ]] 00:01:45.190 + source /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:45.190 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:45.190 ++ SPDK_TEST_NVMF=1 00:01:45.190 ++ SPDK_TEST_NVME_CLI=1 00:01:45.190 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:45.190 ++ SPDK_RUN_UBSAN=1 00:01:45.190 ++ NET_TYPE=phy 00:01:45.190 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:45.190 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:45.190 ++ RUN_NIGHTLY=1 00:01:45.190 + case $SPDK_TEST_NVMF_NICS in 00:01:45.190 + DRIVERS=mlx5_ib 00:01:45.190 + [[ -n mlx5_ib ]] 00:01:45.190 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:45.190 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:51.763 rmmod: ERROR: Module irdma is not currently loaded 00:01:51.763 rmmod: ERROR: Module i40iw is not currently loaded 00:01:51.763 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:51.763 + true 00:01:51.763 + for D in $DRIVERS 00:01:51.763 + sudo modprobe mlx5_ib 00:01:51.763 + exit 0 00:01:51.775 [Pipeline] } 00:01:51.789 [Pipeline] // withEnv 00:01:51.794 [Pipeline] } 00:01:51.808 [Pipeline] // stage 00:01:51.818 [Pipeline] catchError 00:01:51.820 [Pipeline] { 00:01:51.834 [Pipeline] timeout 00:01:51.835 Timeout set to expire in 1 hr 0 min 00:01:51.837 [Pipeline] { 00:01:51.851 [Pipeline] stage 00:01:51.853 [Pipeline] { (Tests) 00:01:51.870 [Pipeline] sh 00:01:52.158 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-phy-autotest 00:01:52.158 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest 00:01:52.158 + DIR_ROOT=/var/jenkins/workspace/nvmf-phy-autotest 00:01:52.158 + [[ -n /var/jenkins/workspace/nvmf-phy-autotest ]] 00:01:52.158 + DIR_SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:52.158 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-phy-autotest/output 00:01:52.158 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/spdk ]] 00:01:52.158 + [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:52.158 + mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/output 00:01:52.158 + [[ -d /var/jenkins/workspace/nvmf-phy-autotest/output ]] 00:01:52.158 + [[ nvmf-phy-autotest == pkgdep-* ]] 00:01:52.158 + cd /var/jenkins/workspace/nvmf-phy-autotest 00:01:52.158 + source /etc/os-release 00:01:52.158 ++ NAME='Fedora Linux' 00:01:52.158 ++ VERSION='39 (Cloud Edition)' 00:01:52.158 ++ ID=fedora 00:01:52.158 ++ VERSION_ID=39 00:01:52.158 ++ VERSION_CODENAME= 00:01:52.158 ++ PLATFORM_ID=platform:f39 00:01:52.158 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:52.158 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:52.158 ++ LOGO=fedora-logo-icon 00:01:52.158 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:52.158 ++ HOME_URL=https://fedoraproject.org/ 00:01:52.158 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:52.158 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:52.158 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:52.158 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:52.158 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:52.158 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:52.158 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:52.158 ++ SUPPORT_END=2024-11-12 00:01:52.158 ++ VARIANT='Cloud Edition' 00:01:52.158 ++ VARIANT_ID=cloud 00:01:52.158 + uname -a 00:01:52.158 Linux spdk-wfp-21 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:52.158 + sudo /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:01:55.453 Hugepages 00:01:55.453 node hugesize free / total 00:01:55.453 node0 1048576kB 0 / 0 00:01:55.453 node0 2048kB 0 / 0 00:01:55.453 node1 1048576kB 0 / 0 00:01:55.453 node1 2048kB 0 / 0 00:01:55.453 00:01:55.453 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:55.453 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:55.453 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:55.453 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:55.453 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:55.453 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:55.453 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:55.453 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:55.453 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:55.453 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:55.453 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:55.453 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:55.453 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:55.453 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:55.453 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:55.453 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:55.453 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:55.453 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:55.453 + rm -f /tmp/spdk-ld-path 00:01:55.453 + source autorun-spdk.conf 00:01:55.453 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.453 ++ SPDK_TEST_NVMF=1 00:01:55.453 ++ SPDK_TEST_NVME_CLI=1 00:01:55.453 ++ SPDK_TEST_NVMF_NICS=mlx5 00:01:55.453 ++ SPDK_RUN_UBSAN=1 00:01:55.453 ++ NET_TYPE=phy 00:01:55.453 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:55.453 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:55.453 ++ RUN_NIGHTLY=1 00:01:55.453 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:55.453 + [[ -n '' ]] 00:01:55.453 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:55.453 + for M in /var/spdk/build-*-manifest.txt 
00:01:55.453 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:55.453 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:55.453 + for M in /var/spdk/build-*-manifest.txt 00:01:55.453 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:55.453 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:55.453 + for M in /var/spdk/build-*-manifest.txt 00:01:55.453 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:55.453 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-phy-autotest/output/ 00:01:55.453 ++ uname 00:01:55.453 + [[ Linux == \L\i\n\u\x ]] 00:01:55.453 + sudo dmesg -T 00:01:55.453 + sudo dmesg --clear 00:01:55.453 + dmesg_pid=1069700 00:01:55.453 + [[ Fedora Linux == FreeBSD ]] 00:01:55.453 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:55.453 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:55.453 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:55.453 + [[ -x /usr/src/fio-static/fio ]] 00:01:55.453 + export FIO_BIN=/usr/src/fio-static/fio 00:01:55.453 + FIO_BIN=/usr/src/fio-static/fio 00:01:55.453 + sudo dmesg -Tw 00:01:55.453 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:55.453 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:55.453 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:55.453 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:55.453 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:55.453 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:55.453 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:55.453 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:55.453 + spdk/autorun.sh /var/jenkins/workspace/nvmf-phy-autotest/autorun-spdk.conf 00:01:55.453 Test configuration: 00:01:55.453 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:55.453 SPDK_TEST_NVMF=1 00:01:55.453 SPDK_TEST_NVME_CLI=1 00:01:55.453 SPDK_TEST_NVMF_NICS=mlx5 00:01:55.453 SPDK_RUN_UBSAN=1 00:01:55.453 NET_TYPE=phy 00:01:55.453 SPDK_TEST_NATIVE_DPDK=v22.11.4 00:01:55.453 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:55.453 RUN_NIGHTLY=1 15:52:26 -- common/autotest_common.sh@1689 -- $ [[ n == y ]] 00:01:55.453 15:52:26 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:01:55.453 15:52:26 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:55.453 15:52:26 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:55.453 15:52:26 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:55.453 15:52:26 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.453 15:52:26 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.453 15:52:26 -- 
paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.453 15:52:26 -- paths/export.sh@5 -- $ export PATH 00:01:55.453 15:52:26 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:55.453 15:52:26 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:01:55.453 15:52:26 -- common/autobuild_common.sh@440 -- $ date +%s 00:01:55.453 15:52:26 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732114346.XXXXXX 00:01:55.453 15:52:26 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732114346.Wqba9j 00:01:55.453 15:52:26 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:01:55.453 15:52:26 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:01:55.453 15:52:26 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:55.453 15:52:26 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:01:55.453 15:52:26 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:55.453 15:52:26 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:55.453 15:52:26 -- common/autobuild_common.sh@456 -- $ get_config_params 00:01:55.453 15:52:26 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:01:55.453 15:52:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.453 15:52:26 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:01:55.453 15:52:26 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:55.453 15:52:26 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:55.453 15:52:26 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:55.453 15:52:26 -- spdk/autobuild.sh@16 -- $ date -u 00:01:55.453 Wed Nov 20 02:52:26 PM UTC 2024 00:01:55.453 15:52:26 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:55.453 LTS-67-gc13c99a5e 00:01:55.453 15:52:26 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:55.453 15:52:26 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:55.453 15:52:26 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:55.453 15:52:26 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:01:55.453 15:52:26 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:55.453 15:52:26 -- 
common/autotest_common.sh@10 -- $ set +x 00:01:55.454 ************************************ 00:01:55.454 START TEST ubsan 00:01:55.454 ************************************ 00:01:55.454 15:52:26 -- common/autotest_common.sh@1114 -- $ echo 'using ubsan' 00:01:55.454 using ubsan 00:01:55.454 00:01:55.454 real 0m0.000s 00:01:55.454 user 0m0.000s 00:01:55.454 sys 0m0.000s 00:01:55.454 15:52:26 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:01:55.454 15:52:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.454 ************************************ 00:01:55.454 END TEST ubsan 00:01:55.454 ************************************ 00:01:55.454 15:52:26 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:01:55.454 15:52:26 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:01:55.454 15:52:26 -- common/autobuild_common.sh@432 -- $ run_test build_native_dpdk _build_native_dpdk 00:01:55.454 15:52:26 -- common/autotest_common.sh@1087 -- $ '[' 2 -le 1 ']' 00:01:55.454 15:52:26 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:01:55.454 15:52:26 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.713 ************************************ 00:01:55.713 START TEST build_native_dpdk 00:01:55.713 ************************************ 00:01:55.713 15:52:26 -- common/autotest_common.sh@1114 -- $ _build_native_dpdk 00:01:55.713 15:52:26 -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:55.713 15:52:26 -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:55.713 15:52:26 -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:55.713 15:52:26 -- common/autobuild_common.sh@51 -- $ local compiler 00:01:55.713 15:52:26 -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:55.713 15:52:26 -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:55.713 15:52:26 -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:55.713 15:52:26 -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:55.713 15:52:26 -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:55.713 15:52:26 -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:55.713 15:52:26 -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:55.713 15:52:26 -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:55.713 15:52:26 -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:55.713 15:52:26 -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:55.714 15:52:26 -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:55.714 15:52:26 -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:01:55.714 15:52:26 -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:55.714 15:52:26 -- common/autobuild_common.sh@73 -- $ [[ ! 
-d /var/jenkins/workspace/nvmf-phy-autotest/dpdk ]] 00:01:55.714 15:52:26 -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:01:55.714 15:52:26 -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk log --oneline -n 5 00:01:55.714 caf0f5d395 version: 22.11.4 00:01:55.714 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:01:55.714 dc9c799c7d vhost: fix missing spinlock unlock 00:01:55.714 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:01:55.714 6ef77f2a5e net/gve: fix RX buffer size alignment 00:01:55.714 15:52:26 -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:01:55.714 15:52:26 -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:01:55.714 15:52:26 -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:01:55.714 15:52:26 -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:01:55.714 15:52:26 -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:01:55.714 15:52:26 -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:01:55.714 15:52:26 -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:01:55.714 15:52:26 -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:01:55.714 15:52:26 -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:01:55.714 15:52:26 -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base") 00:01:55.714 15:52:26 -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n 00:01:55.714 15:52:26 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:55.714 15:52:26 -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]] 00:01:55.714 15:52:26 -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]] 00:01:55.714 15:52:26 -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:01:55.714 15:52:26 -- common/autobuild_common.sh@168 -- $ uname -s 00:01:55.714 15:52:26 -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']' 00:01:55.714 15:52:26 -- common/autobuild_common.sh@169 -- $ lt 22.11.4 21.11.0 00:01:55.714 15:52:26 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:01:55.714 15:52:26 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:55.714 15:52:26 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:55.714 15:52:26 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:55.714 15:52:26 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:55.714 15:52:26 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:55.714 15:52:26 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:55.714 15:52:26 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:55.714 15:52:26 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:55.714 15:52:26 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:55.714 15:52:26 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:55.714 15:52:26 -- scripts/common.sh@343 -- $ case "$op" in 00:01:55.714 15:52:26 -- scripts/common.sh@344 -- $ : 1 00:01:55.714 15:52:26 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:55.714 15:52:26 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:01:55.714 15:52:26 -- scripts/common.sh@364 -- $ decimal 22 00:01:55.714 15:52:26 -- scripts/common.sh@352 -- $ local d=22 00:01:55.714 15:52:26 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:55.714 15:52:26 -- scripts/common.sh@354 -- $ echo 22 00:01:55.714 15:52:26 -- scripts/common.sh@364 -- $ ver1[v]=22 00:01:55.714 15:52:26 -- scripts/common.sh@365 -- $ decimal 21 00:01:55.714 15:52:26 -- scripts/common.sh@352 -- $ local d=21 00:01:55.714 15:52:26 -- scripts/common.sh@353 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:01:55.714 15:52:26 -- scripts/common.sh@354 -- $ echo 21 00:01:55.714 15:52:26 -- scripts/common.sh@365 -- $ ver2[v]=21 00:01:55.714 15:52:26 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:55.714 15:52:26 -- scripts/common.sh@366 -- $ return 1 00:01:55.714 15:52:26 -- common/autobuild_common.sh@173 -- $ patch -p1 00:01:55.714 patching file config/rte_config.h 00:01:55.714 Hunk #1 succeeded at 60 (offset 1 line). 00:01:55.714 15:52:26 -- common/autobuild_common.sh@176 -- $ lt 22.11.4 24.07.0 00:01:55.714 15:52:26 -- scripts/common.sh@372 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:01:55.714 15:52:26 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:01:55.714 15:52:26 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:01:55.714 15:52:26 -- scripts/common.sh@335 -- $ IFS=.-: 00:01:55.714 15:52:26 -- scripts/common.sh@335 -- $ read -ra ver1 00:01:55.714 15:52:26 -- scripts/common.sh@336 -- $ IFS=.-: 00:01:55.714 15:52:26 -- scripts/common.sh@336 -- $ read -ra ver2 00:01:55.714 15:52:26 -- scripts/common.sh@337 -- $ local 'op=<' 00:01:55.714 15:52:26 -- scripts/common.sh@339 -- $ ver1_l=3 00:01:55.714 15:52:26 -- scripts/common.sh@340 -- $ ver2_l=3 00:01:55.714 15:52:26 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:01:55.714 15:52:26 -- scripts/common.sh@343 -- $ case "$op" in 00:01:55.714 15:52:26 -- scripts/common.sh@344 -- $ : 1 00:01:55.714 15:52:26 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:01:55.714 15:52:26 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:01:55.714 15:52:26 -- scripts/common.sh@364 -- $ decimal 22 00:01:55.714 15:52:26 -- scripts/common.sh@352 -- $ local d=22 00:01:55.714 15:52:26 -- scripts/common.sh@353 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:01:55.714 15:52:26 -- scripts/common.sh@354 -- $ echo 22 00:01:55.714 15:52:26 -- scripts/common.sh@364 -- $ ver1[v]=22 00:01:55.714 15:52:26 -- scripts/common.sh@365 -- $ decimal 24 00:01:55.714 15:52:26 -- scripts/common.sh@352 -- $ local d=24 00:01:55.714 15:52:26 -- scripts/common.sh@353 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:01:55.714 15:52:26 -- scripts/common.sh@354 -- $ echo 24 00:01:55.714 15:52:26 -- scripts/common.sh@365 -- $ ver2[v]=24 00:01:55.714 15:52:26 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:01:55.714 15:52:26 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:01:55.714 15:52:26 -- scripts/common.sh@367 -- $ return 0 00:01:55.714 15:52:26 -- common/autobuild_common.sh@177 -- $ patch -p1 00:01:55.714 patching file lib/pcapng/rte_pcapng.c 00:01:55.714 Hunk #1 succeeded at 110 (offset -18 lines). 
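
Note: the xtrace above is scripts/common.sh stepping through its field-by-field version comparison, first deciding that 22.11.4 is not older than 21.11.0 and then that it is older than 24.07.0, which is what triggers the rte_config.h and rte_pcapng.c compatibility patches. A minimal bash sketch of that comparison follows; version_lt and its variables are illustrative stand-ins, not the actual lt/cmp_versions helpers shown in the trace.

    # Sketch only: split both versions on '.', '-' and ':' and compare field by field.
    version_lt() {                          # returns 0 (true) when $1 < $2
        local IFS='.-:'
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v len
        len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first differing field decides
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                            # equal versions are not "less than"
    }

    version_lt 22.11.4 21.11.0 || echo "22.11.4 >= 21.11.0"
    version_lt 22.11.4 24.07.0 && echo "22.11.4 < 24.07.0: apply pcapng compatibility patch"
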
00:01:55.714 15:52:26 -- common/autobuild_common.sh@180 -- $ dpdk_kmods=false 00:01:55.714 15:52:26 -- common/autobuild_common.sh@181 -- $ uname -s 00:01:55.714 15:52:26 -- common/autobuild_common.sh@181 -- $ '[' Linux = FreeBSD ']' 00:01:55.714 15:52:26 -- common/autobuild_common.sh@185 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base 00:01:55.714 15:52:26 -- common/autobuild_common.sh@185 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:00.993 The Meson build system 00:02:00.993 Version: 1.5.0 00:02:00.993 Source dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk 00:02:00.993 Build dir: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp 00:02:00.993 Build type: native build 00:02:00.993 Program cat found: YES (/usr/bin/cat) 00:02:00.993 Project name: DPDK 00:02:00.993 Project version: 22.11.4 00:02:00.993 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:00.993 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:00.993 Host machine cpu family: x86_64 00:02:00.993 Host machine cpu: x86_64 00:02:00.993 Message: ## Building in Developer Mode ## 00:02:00.993 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:00.993 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:00.993 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:00.993 Program objdump found: YES (/usr/bin/objdump) 00:02:00.993 Program python3 found: YES (/usr/bin/python3) 00:02:00.993 Program cat found: YES (/usr/bin/cat) 00:02:00.993 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:00.993 Checking for size of "void *" : 8 00:02:00.993 Checking for size of "void *" : 8 (cached) 00:02:00.993 Library m found: YES 00:02:00.993 Library numa found: YES 00:02:00.993 Has header "numaif.h" : YES 00:02:00.993 Library fdt found: NO 00:02:00.993 Library execinfo found: NO 00:02:00.993 Has header "execinfo.h" : YES 00:02:00.993 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:00.993 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:00.993 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:00.993 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:00.993 Run-time dependency openssl found: YES 3.1.1 00:02:00.993 Run-time dependency libpcap found: YES 1.10.4 00:02:00.993 Has header "pcap.h" with dependency libpcap: YES 00:02:00.993 Compiler for C supports arguments -Wcast-qual: YES 00:02:00.993 Compiler for C supports arguments -Wdeprecated: YES 00:02:00.993 Compiler for C supports arguments -Wformat: YES 00:02:00.993 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:00.993 Compiler for C supports arguments -Wformat-security: NO 00:02:00.993 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:00.993 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:00.993 Compiler for C supports arguments -Wnested-externs: YES 00:02:00.993 Compiler for C supports arguments -Wold-style-definition: YES 00:02:00.993 Compiler for C supports arguments -Wpointer-arith: YES 00:02:00.993 Compiler for C supports arguments -Wsign-compare: YES 00:02:00.993 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:00.993 Compiler for C supports arguments -Wundef: YES 00:02:00.993 Compiler for C supports arguments -Wwrite-strings: YES 00:02:00.993 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:00.993 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:00.993 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:00.993 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:00.993 Compiler for C supports arguments -mavx512f: YES 00:02:00.993 Checking if "AVX512 checking" compiles: YES 00:02:00.993 Fetching value of define "__SSE4_2__" : 1 00:02:00.993 Fetching value of define "__AES__" : 1 00:02:00.993 Fetching value of define "__AVX__" : 1 00:02:00.993 Fetching value of define "__AVX2__" : 1 00:02:00.993 Fetching value of define "__AVX512BW__" : 1 00:02:00.993 Fetching value of define "__AVX512CD__" : 1 00:02:00.993 Fetching value of define "__AVX512DQ__" : 1 00:02:00.993 Fetching value of define "__AVX512F__" : 1 00:02:00.993 Fetching value of define "__AVX512VL__" : 1 00:02:00.993 Fetching value of define "__PCLMUL__" : 1 00:02:00.993 Fetching value of define "__RDRND__" : 1 00:02:00.993 Fetching value of define "__RDSEED__" : 1 00:02:00.993 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:00.994 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:00.994 Message: lib/kvargs: Defining dependency "kvargs" 00:02:00.994 Message: lib/telemetry: Defining dependency "telemetry" 00:02:00.994 Checking for function "getentropy" : YES 00:02:00.994 Message: lib/eal: Defining dependency "eal" 00:02:00.994 Message: lib/ring: Defining dependency "ring" 00:02:00.994 Message: lib/rcu: Defining dependency "rcu" 00:02:00.994 Message: lib/mempool: Defining dependency "mempool" 00:02:00.994 Message: lib/mbuf: Defining dependency "mbuf" 00:02:00.994 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:00.994 Fetching 
value of define "__AVX512F__" : 1 (cached) 00:02:00.994 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:00.994 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:00.994 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:00.994 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:00.994 Compiler for C supports arguments -mpclmul: YES 00:02:00.994 Compiler for C supports arguments -maes: YES 00:02:00.994 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:00.994 Compiler for C supports arguments -mavx512bw: YES 00:02:00.994 Compiler for C supports arguments -mavx512dq: YES 00:02:00.994 Compiler for C supports arguments -mavx512vl: YES 00:02:00.994 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:00.994 Compiler for C supports arguments -mavx2: YES 00:02:00.994 Compiler for C supports arguments -mavx: YES 00:02:00.994 Message: lib/net: Defining dependency "net" 00:02:00.994 Message: lib/meter: Defining dependency "meter" 00:02:00.994 Message: lib/ethdev: Defining dependency "ethdev" 00:02:00.994 Message: lib/pci: Defining dependency "pci" 00:02:00.994 Message: lib/cmdline: Defining dependency "cmdline" 00:02:00.994 Message: lib/metrics: Defining dependency "metrics" 00:02:00.994 Message: lib/hash: Defining dependency "hash" 00:02:00.994 Message: lib/timer: Defining dependency "timer" 00:02:00.994 Fetching value of define "__AVX2__" : 1 (cached) 00:02:00.994 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:00.994 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:00.994 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:00.994 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:00.994 Message: lib/acl: Defining dependency "acl" 00:02:00.994 Message: lib/bbdev: Defining dependency "bbdev" 00:02:00.994 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:00.994 Run-time dependency libelf found: YES 0.191 00:02:00.994 Message: lib/bpf: Defining dependency "bpf" 00:02:00.994 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:00.994 Message: lib/compressdev: Defining dependency "compressdev" 00:02:00.994 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:00.994 Message: lib/distributor: Defining dependency "distributor" 00:02:00.994 Message: lib/efd: Defining dependency "efd" 00:02:00.994 Message: lib/eventdev: Defining dependency "eventdev" 00:02:00.994 Message: lib/gpudev: Defining dependency "gpudev" 00:02:00.994 Message: lib/gro: Defining dependency "gro" 00:02:00.994 Message: lib/gso: Defining dependency "gso" 00:02:00.994 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:00.994 Message: lib/jobstats: Defining dependency "jobstats" 00:02:00.994 Message: lib/latencystats: Defining dependency "latencystats" 00:02:00.994 Message: lib/lpm: Defining dependency "lpm" 00:02:00.994 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:00.994 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:00.994 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:00.994 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:00.994 Message: lib/member: Defining dependency "member" 00:02:00.994 Message: lib/pcapng: Defining dependency "pcapng" 00:02:00.994 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:00.994 Message: lib/power: Defining dependency "power" 00:02:00.994 Message: lib/rawdev: Defining dependency "rawdev" 00:02:00.994 Message: lib/regexdev: Defining dependency "regexdev" 00:02:00.994 Message: lib/dmadev: 
Defining dependency "dmadev" 00:02:00.994 Message: lib/rib: Defining dependency "rib" 00:02:00.994 Message: lib/reorder: Defining dependency "reorder" 00:02:00.994 Message: lib/sched: Defining dependency "sched" 00:02:00.994 Message: lib/security: Defining dependency "security" 00:02:00.994 Message: lib/stack: Defining dependency "stack" 00:02:00.994 Has header "linux/userfaultfd.h" : YES 00:02:00.994 Message: lib/vhost: Defining dependency "vhost" 00:02:00.994 Message: lib/ipsec: Defining dependency "ipsec" 00:02:00.994 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:00.994 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:00.994 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:00.994 Message: lib/fib: Defining dependency "fib" 00:02:00.994 Message: lib/port: Defining dependency "port" 00:02:00.994 Message: lib/pdump: Defining dependency "pdump" 00:02:00.994 Message: lib/table: Defining dependency "table" 00:02:00.994 Message: lib/pipeline: Defining dependency "pipeline" 00:02:00.994 Message: lib/graph: Defining dependency "graph" 00:02:00.994 Message: lib/node: Defining dependency "node" 00:02:00.994 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:00.994 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:00.994 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:00.994 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:00.994 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:00.994 Compiler for C supports arguments -Wno-unused-value: YES 00:02:00.994 Compiler for C supports arguments -Wno-format: YES 00:02:00.994 Compiler for C supports arguments -Wno-format-security: YES 00:02:00.994 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:01.564 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:01.564 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:01.564 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:01.564 Fetching value of define "__AVX2__" : 1 (cached) 00:02:01.564 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:01.564 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:01.564 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:01.564 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:01.564 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:01.564 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:01.564 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:01.564 Configuring doxy-api.conf using configuration 00:02:01.564 Program sphinx-build found: NO 00:02:01.564 Configuring rte_build_config.h using configuration 00:02:01.564 Message: 00:02:01.564 ================= 00:02:01.564 Applications Enabled 00:02:01.564 ================= 00:02:01.564 00:02:01.564 apps: 00:02:01.564 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:01.564 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:01.564 test-security-perf, 00:02:01.564 00:02:01.564 Message: 00:02:01.564 ================= 00:02:01.564 Libraries Enabled 00:02:01.564 ================= 00:02:01.564 00:02:01.564 libs: 00:02:01.564 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:01.564 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:01.564 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:01.564 eventdev, 
gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:01.564 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:01.564 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:01.564 table, pipeline, graph, node, 00:02:01.564 00:02:01.564 Message: 00:02:01.564 =============== 00:02:01.564 Drivers Enabled 00:02:01.564 =============== 00:02:01.564 00:02:01.564 common: 00:02:01.564 00:02:01.564 bus: 00:02:01.564 pci, vdev, 00:02:01.564 mempool: 00:02:01.564 ring, 00:02:01.564 dma: 00:02:01.564 00:02:01.564 net: 00:02:01.564 i40e, 00:02:01.564 raw: 00:02:01.564 00:02:01.564 crypto: 00:02:01.564 00:02:01.564 compress: 00:02:01.564 00:02:01.564 regex: 00:02:01.564 00:02:01.564 vdpa: 00:02:01.564 00:02:01.564 event: 00:02:01.564 00:02:01.564 baseband: 00:02:01.564 00:02:01.564 gpu: 00:02:01.564 00:02:01.564 00:02:01.564 Message: 00:02:01.564 ================= 00:02:01.564 Content Skipped 00:02:01.564 ================= 00:02:01.564 00:02:01.564 apps: 00:02:01.564 00:02:01.564 libs: 00:02:01.564 kni: explicitly disabled via build config (deprecated lib) 00:02:01.564 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:01.564 00:02:01.564 drivers: 00:02:01.564 common/cpt: not in enabled drivers build config 00:02:01.564 common/dpaax: not in enabled drivers build config 00:02:01.564 common/iavf: not in enabled drivers build config 00:02:01.564 common/idpf: not in enabled drivers build config 00:02:01.564 common/mvep: not in enabled drivers build config 00:02:01.564 common/octeontx: not in enabled drivers build config 00:02:01.564 bus/auxiliary: not in enabled drivers build config 00:02:01.564 bus/dpaa: not in enabled drivers build config 00:02:01.564 bus/fslmc: not in enabled drivers build config 00:02:01.564 bus/ifpga: not in enabled drivers build config 00:02:01.564 bus/vmbus: not in enabled drivers build config 00:02:01.564 common/cnxk: not in enabled drivers build config 00:02:01.564 common/mlx5: not in enabled drivers build config 00:02:01.564 common/qat: not in enabled drivers build config 00:02:01.564 common/sfc_efx: not in enabled drivers build config 00:02:01.564 mempool/bucket: not in enabled drivers build config 00:02:01.564 mempool/cnxk: not in enabled drivers build config 00:02:01.564 mempool/dpaa: not in enabled drivers build config 00:02:01.564 mempool/dpaa2: not in enabled drivers build config 00:02:01.564 mempool/octeontx: not in enabled drivers build config 00:02:01.564 mempool/stack: not in enabled drivers build config 00:02:01.564 dma/cnxk: not in enabled drivers build config 00:02:01.564 dma/dpaa: not in enabled drivers build config 00:02:01.564 dma/dpaa2: not in enabled drivers build config 00:02:01.564 dma/hisilicon: not in enabled drivers build config 00:02:01.564 dma/idxd: not in enabled drivers build config 00:02:01.564 dma/ioat: not in enabled drivers build config 00:02:01.564 dma/skeleton: not in enabled drivers build config 00:02:01.564 net/af_packet: not in enabled drivers build config 00:02:01.564 net/af_xdp: not in enabled drivers build config 00:02:01.564 net/ark: not in enabled drivers build config 00:02:01.564 net/atlantic: not in enabled drivers build config 00:02:01.564 net/avp: not in enabled drivers build config 00:02:01.564 net/axgbe: not in enabled drivers build config 00:02:01.564 net/bnx2x: not in enabled drivers build config 00:02:01.564 net/bnxt: not in enabled drivers build config 00:02:01.564 net/bonding: not in enabled drivers build config 00:02:01.564 net/cnxk: not in enabled drivers build config 
00:02:01.564 net/cxgbe: not in enabled drivers build config 00:02:01.564 net/dpaa: not in enabled drivers build config 00:02:01.564 net/dpaa2: not in enabled drivers build config 00:02:01.564 net/e1000: not in enabled drivers build config 00:02:01.564 net/ena: not in enabled drivers build config 00:02:01.564 net/enetc: not in enabled drivers build config 00:02:01.564 net/enetfec: not in enabled drivers build config 00:02:01.564 net/enic: not in enabled drivers build config 00:02:01.564 net/failsafe: not in enabled drivers build config 00:02:01.564 net/fm10k: not in enabled drivers build config 00:02:01.564 net/gve: not in enabled drivers build config 00:02:01.564 net/hinic: not in enabled drivers build config 00:02:01.564 net/hns3: not in enabled drivers build config 00:02:01.564 net/iavf: not in enabled drivers build config 00:02:01.564 net/ice: not in enabled drivers build config 00:02:01.564 net/idpf: not in enabled drivers build config 00:02:01.564 net/igc: not in enabled drivers build config 00:02:01.564 net/ionic: not in enabled drivers build config 00:02:01.564 net/ipn3ke: not in enabled drivers build config 00:02:01.564 net/ixgbe: not in enabled drivers build config 00:02:01.564 net/kni: not in enabled drivers build config 00:02:01.564 net/liquidio: not in enabled drivers build config 00:02:01.564 net/mana: not in enabled drivers build config 00:02:01.564 net/memif: not in enabled drivers build config 00:02:01.564 net/mlx4: not in enabled drivers build config 00:02:01.564 net/mlx5: not in enabled drivers build config 00:02:01.564 net/mvneta: not in enabled drivers build config 00:02:01.564 net/mvpp2: not in enabled drivers build config 00:02:01.564 net/netvsc: not in enabled drivers build config 00:02:01.564 net/nfb: not in enabled drivers build config 00:02:01.564 net/nfp: not in enabled drivers build config 00:02:01.564 net/ngbe: not in enabled drivers build config 00:02:01.564 net/null: not in enabled drivers build config 00:02:01.564 net/octeontx: not in enabled drivers build config 00:02:01.564 net/octeon_ep: not in enabled drivers build config 00:02:01.564 net/pcap: not in enabled drivers build config 00:02:01.564 net/pfe: not in enabled drivers build config 00:02:01.564 net/qede: not in enabled drivers build config 00:02:01.565 net/ring: not in enabled drivers build config 00:02:01.565 net/sfc: not in enabled drivers build config 00:02:01.565 net/softnic: not in enabled drivers build config 00:02:01.565 net/tap: not in enabled drivers build config 00:02:01.565 net/thunderx: not in enabled drivers build config 00:02:01.565 net/txgbe: not in enabled drivers build config 00:02:01.565 net/vdev_netvsc: not in enabled drivers build config 00:02:01.565 net/vhost: not in enabled drivers build config 00:02:01.565 net/virtio: not in enabled drivers build config 00:02:01.565 net/vmxnet3: not in enabled drivers build config 00:02:01.565 raw/cnxk_bphy: not in enabled drivers build config 00:02:01.565 raw/cnxk_gpio: not in enabled drivers build config 00:02:01.565 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:01.565 raw/ifpga: not in enabled drivers build config 00:02:01.565 raw/ntb: not in enabled drivers build config 00:02:01.565 raw/skeleton: not in enabled drivers build config 00:02:01.565 crypto/armv8: not in enabled drivers build config 00:02:01.565 crypto/bcmfs: not in enabled drivers build config 00:02:01.565 crypto/caam_jr: not in enabled drivers build config 00:02:01.565 crypto/ccp: not in enabled drivers build config 00:02:01.565 crypto/cnxk: not in enabled drivers 
build config 00:02:01.565 crypto/dpaa_sec: not in enabled drivers build config 00:02:01.565 crypto/dpaa2_sec: not in enabled drivers build config 00:02:01.565 crypto/ipsec_mb: not in enabled drivers build config 00:02:01.565 crypto/mlx5: not in enabled drivers build config 00:02:01.565 crypto/mvsam: not in enabled drivers build config 00:02:01.565 crypto/nitrox: not in enabled drivers build config 00:02:01.565 crypto/null: not in enabled drivers build config 00:02:01.565 crypto/octeontx: not in enabled drivers build config 00:02:01.565 crypto/openssl: not in enabled drivers build config 00:02:01.565 crypto/scheduler: not in enabled drivers build config 00:02:01.565 crypto/uadk: not in enabled drivers build config 00:02:01.565 crypto/virtio: not in enabled drivers build config 00:02:01.565 compress/isal: not in enabled drivers build config 00:02:01.565 compress/mlx5: not in enabled drivers build config 00:02:01.565 compress/octeontx: not in enabled drivers build config 00:02:01.565 compress/zlib: not in enabled drivers build config 00:02:01.565 regex/mlx5: not in enabled drivers build config 00:02:01.565 regex/cn9k: not in enabled drivers build config 00:02:01.565 vdpa/ifc: not in enabled drivers build config 00:02:01.565 vdpa/mlx5: not in enabled drivers build config 00:02:01.565 vdpa/sfc: not in enabled drivers build config 00:02:01.565 event/cnxk: not in enabled drivers build config 00:02:01.565 event/dlb2: not in enabled drivers build config 00:02:01.565 event/dpaa: not in enabled drivers build config 00:02:01.565 event/dpaa2: not in enabled drivers build config 00:02:01.565 event/dsw: not in enabled drivers build config 00:02:01.565 event/opdl: not in enabled drivers build config 00:02:01.565 event/skeleton: not in enabled drivers build config 00:02:01.565 event/sw: not in enabled drivers build config 00:02:01.565 event/octeontx: not in enabled drivers build config 00:02:01.565 baseband/acc: not in enabled drivers build config 00:02:01.565 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:01.565 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:01.565 baseband/la12xx: not in enabled drivers build config 00:02:01.565 baseband/null: not in enabled drivers build config 00:02:01.565 baseband/turbo_sw: not in enabled drivers build config 00:02:01.565 gpu/cuda: not in enabled drivers build config 00:02:01.565 00:02:01.565 00:02:01.565 Build targets in project: 311 00:02:01.565 00:02:01.565 DPDK 22.11.4 00:02:01.565 00:02:01.565 User defined options 00:02:01.565 libdir : lib 00:02:01.565 prefix : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:02:01.565 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:01.565 c_link_args : 00:02:01.565 enable_docs : false 00:02:01.565 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base, 00:02:01.565 enable_kmods : false 00:02:01.565 machine : native 00:02:01.565 tests : false 00:02:01.565 00:02:01.565 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:01.565 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
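
Note: both WARNING lines above concern only how the configure step was invoked, not its result. Meson now prefers the explicit "meson setup" subcommand, and config/meson.build asks for cpu_instruction_set in place of the deprecated machine option. A sketch of the same configure step spelled the non-deprecated way, with the option values copied from the "User defined options" block above (this is illustrative, not a change to the job script):

    # run from /var/jenkins/workspace/nvmf-phy-autotest/dpdk, as the job does
    meson setup build-tmp \
        --prefix=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build \
        --libdir lib \
        -Denable_docs=false -Denable_kmods=false -Dtests=false \
        -Dc_link_args= \
        '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
        -Dcpu_instruction_set=native \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
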
00:02:01.565 15:52:32 -- common/autobuild_common.sh@189 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 00:02:01.565 ninja: Entering directory `/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:02:01.825 [1/740] Generating lib/rte_kvargs_def with a custom command 00:02:01.825 [2/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:01.825 [3/740] Generating lib/rte_telemetry_def with a custom command 00:02:01.825 [4/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:01.825 [5/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:01.825 [6/740] Generating lib/rte_eal_mingw with a custom command 00:02:01.825 [7/740] Generating lib/rte_ring_mingw with a custom command 00:02:01.825 [8/740] Generating lib/rte_rcu_def with a custom command 00:02:01.825 [9/740] Generating lib/rte_rcu_mingw with a custom command 00:02:01.825 [10/740] Generating lib/rte_mempool_def with a custom command 00:02:01.825 [11/740] Generating lib/rte_mempool_mingw with a custom command 00:02:01.825 [12/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:01.825 [13/740] Generating lib/rte_eal_def with a custom command 00:02:01.825 [14/740] Generating lib/rte_meter_def with a custom command 00:02:01.825 [15/740] Generating lib/rte_ring_def with a custom command 00:02:01.825 [16/740] Generating lib/rte_net_def with a custom command 00:02:01.825 [17/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:01.825 [18/740] Generating lib/rte_mbuf_def with a custom command 00:02:01.825 [19/740] Generating lib/rte_net_mingw with a custom command 00:02:01.825 [20/740] Generating lib/rte_meter_mingw with a custom command 00:02:01.825 [21/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:01.825 [22/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:01.825 [23/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:01.825 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:01.825 [25/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:01.825 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:01.825 [27/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:01.825 [28/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:01.825 [29/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:01.825 [30/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:01.825 [31/740] Generating lib/rte_ethdev_def with a custom command 00:02:01.825 [32/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:01.825 [33/740] Generating lib/rte_pci_mingw with a custom command 00:02:01.825 [34/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:01.825 [35/740] Generating lib/rte_pci_def with a custom command 00:02:01.825 [36/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:01.825 [37/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:01.825 [38/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:01.825 [39/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:01.825 [40/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:01.826 [41/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:01.826 
[42/740] Linking static target lib/librte_kvargs.a 00:02:01.826 [43/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:01.826 [44/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:01.826 [45/740] Generating lib/rte_metrics_mingw with a custom command 00:02:01.826 [46/740] Generating lib/rte_metrics_def with a custom command 00:02:01.826 [47/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:01.826 [48/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:01.826 [49/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:01.826 [50/740] Generating lib/rte_cmdline_def with a custom command 00:02:01.826 [51/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:02.087 [52/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:02.087 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:02.087 [54/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:02.087 [55/740] Generating lib/rte_hash_def with a custom command 00:02:02.087 [56/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:02.087 [57/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:02.087 [58/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:02.087 [59/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:02.087 [60/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:02.087 [61/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:02.087 [62/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:02.087 [63/740] Generating lib/rte_hash_mingw with a custom command 00:02:02.087 [64/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:02.087 [65/740] Generating lib/rte_timer_def with a custom command 00:02:02.087 [66/740] Generating lib/rte_timer_mingw with a custom command 00:02:02.087 [67/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:02.087 [68/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:02.087 [69/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:02.087 [70/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:02.087 [71/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:02.087 [72/740] Generating lib/rte_acl_def with a custom command 00:02:02.087 [73/740] Generating lib/rte_acl_mingw with a custom command 00:02:02.087 [74/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:02.087 [75/740] Generating lib/rte_bbdev_def with a custom command 00:02:02.087 [76/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:02.087 [77/740] Generating lib/rte_bitratestats_def with a custom command 00:02:02.087 [78/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:02.087 [79/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:02.087 [80/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:02.087 [81/740] Linking static target lib/librte_pci.a 00:02:02.087 [82/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:02.087 [83/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:02.087 [84/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:02.087 [85/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:02.087 [86/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:02.087 [87/740] Generating lib/rte_bpf_def with a custom command 00:02:02.087 [88/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:02.087 [89/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:02.087 [90/740] Generating lib/rte_cfgfile_def with a custom command 00:02:02.087 [91/740] Linking static target lib/librte_meter.a 00:02:02.087 [92/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:02.087 [93/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:02.087 [94/740] Generating lib/rte_bpf_mingw with a custom command 00:02:02.087 [95/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:02.087 [96/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:02.087 [97/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:02.087 [98/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:02.087 [99/740] Generating lib/rte_compressdev_def with a custom command 00:02:02.087 [100/740] Linking static target lib/librte_ring.a 00:02:02.087 [101/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:02.087 [102/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:02.087 [103/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:02.087 [104/740] Generating lib/rte_cryptodev_def with a custom command 00:02:02.087 [105/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:02.087 [106/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:02.087 [107/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:02.087 [108/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:02.087 [109/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:02.087 [110/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:02.087 [111/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:02.087 [112/740] Generating lib/rte_distributor_def with a custom command 00:02:02.087 [113/740] Generating lib/rte_distributor_mingw with a custom command 00:02:02.087 [114/740] Generating lib/rte_efd_mingw with a custom command 00:02:02.087 [115/740] Generating lib/rte_efd_def with a custom command 00:02:02.087 [116/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:02.087 [117/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:02.087 [118/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:02.087 [119/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:02.087 [120/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:02.087 [121/740] Generating lib/rte_eventdev_def with a custom command 00:02:02.087 [122/740] Generating lib/rte_gpudev_def with a custom command 00:02:02.087 [123/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:02.087 [124/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:02.087 [125/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:02.087 [126/740] Compiling C object 
lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:02.087 [127/740] Generating lib/rte_gro_def with a custom command 00:02:02.087 [128/740] Generating lib/rte_gro_mingw with a custom command 00:02:02.355 [129/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:02.355 [130/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:02.355 [131/740] Generating lib/rte_gso_def with a custom command 00:02:02.355 [132/740] Generating lib/rte_gso_mingw with a custom command 00:02:02.355 [133/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:02.355 [134/740] Generating lib/rte_ip_frag_def with a custom command 00:02:02.355 [135/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:02.355 [136/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.355 [137/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.355 [138/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:02.355 [139/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:02.355 [140/740] Generating lib/rte_jobstats_def with a custom command 00:02:02.355 [141/740] Linking target lib/librte_kvargs.so.23.0 00:02:02.355 [142/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:02.355 [143/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:02.355 [144/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:02.355 [145/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:02.355 [146/740] Linking static target lib/librte_cfgfile.a 00:02:02.355 [147/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:02.355 [148/740] Generating lib/rte_latencystats_def with a custom command 00:02:02.355 [149/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:02.355 [150/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:02.355 [151/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:02.355 [152/740] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.613 [153/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:02.613 [154/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:02.613 [155/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:02.613 [156/740] Generating lib/rte_lpm_mingw with a custom command 00:02:02.613 [157/740] Generating lib/rte_lpm_def with a custom command 00:02:02.613 [158/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:02.613 [159/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:02.613 [160/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:02.613 [161/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:02.613 [162/740] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:02.613 [163/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:02.613 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:02.613 [165/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:02.613 [166/740] Generating lib/rte_member_def with a custom command 00:02:02.613 [167/740] Compiling C object 
lib/librte_net.a.p/net_rte_ether.c.o 00:02:02.613 [168/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:02.613 [169/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:02.613 [170/740] Linking static target lib/librte_jobstats.a 00:02:02.613 [171/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:02.613 [172/740] Generating lib/rte_member_mingw with a custom command 00:02:02.613 [173/740] Generating lib/rte_pcapng_def with a custom command 00:02:02.613 [174/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:02.613 [175/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:02.613 [176/740] Linking static target lib/librte_cmdline.a 00:02:02.613 [177/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:02.613 [178/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:02.613 [179/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:02.613 [180/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:02.613 [181/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:02.613 [182/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols 00:02:02.613 [183/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:02.613 [184/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:02.613 [185/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:02.613 [186/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:02.613 [187/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:02.613 [188/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:02.613 [189/740] Generating lib/rte_power_def with a custom command 00:02:02.613 [190/740] Linking static target lib/librte_telemetry.a 00:02:02.613 [191/740] Generating lib/rte_power_mingw with a custom command 00:02:02.613 [192/740] Linking static target lib/librte_metrics.a 00:02:02.613 [193/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:02.613 [194/740] Generating lib/rte_rawdev_def with a custom command 00:02:02.613 [195/740] Linking static target lib/librte_timer.a 00:02:02.613 [196/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:02.613 [197/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:02.613 [198/740] Generating lib/rte_regexdev_def with a custom command 00:02:02.613 [199/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:02.613 [200/740] Generating lib/rte_dmadev_def with a custom command 00:02:02.613 [201/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:02.613 [202/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:02.613 [203/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:02.613 [204/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:02.613 [205/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:02.613 [206/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:02.613 [207/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:02.613 [208/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:02.613 [209/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:02.613 [210/740] Compiling C object 
lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:02.613 [211/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:02.613 [212/740] Generating lib/rte_rib_def with a custom command 00:02:02.613 [213/740] Generating lib/rte_rib_mingw with a custom command 00:02:02.613 [214/740] Generating lib/rte_reorder_mingw with a custom command 00:02:02.613 [215/740] Generating lib/rte_reorder_def with a custom command 00:02:02.613 [216/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:02.613 [217/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:02.613 [218/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:02.613 [219/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:02.613 [220/740] Linking static target lib/librte_net.a 00:02:02.613 [221/740] Linking static target lib/librte_bitratestats.a 00:02:02.613 [222/740] Generating lib/rte_security_mingw with a custom command 00:02:02.876 [223/740] Generating lib/rte_sched_mingw with a custom command 00:02:02.876 [224/740] Generating lib/rte_security_def with a custom command 00:02:02.876 [225/740] Generating lib/rte_sched_def with a custom command 00:02:02.876 [226/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:02.876 [227/740] Generating lib/rte_stack_def with a custom command 00:02:02.876 [228/740] Generating lib/rte_stack_mingw with a custom command 00:02:02.876 [229/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:02.876 [230/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:02.876 [231/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:02.876 [232/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:02.876 [233/740] Generating lib/rte_vhost_def with a custom command 00:02:02.876 [234/740] Generating lib/rte_vhost_mingw with a custom command 00:02:02.876 [235/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:02.876 [236/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:02.876 [237/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:02.876 [238/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:02.876 [239/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:02.876 [240/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:02.876 [241/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:02.876 [242/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:02.876 [243/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:02.876 [244/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:02.876 [245/740] Generating lib/rte_ipsec_def with a custom command 00:02:02.877 [246/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:02.877 [247/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:02.877 [248/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:02.877 [249/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:02.877 [250/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:02.877 [251/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:02.877 [252/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:02.877 [253/740] Linking static target lib/librte_stack.a 
00:02:02.877 [254/740] Generating lib/rte_fib_def with a custom command 00:02:02.877 [255/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:02.877 [256/740] Generating lib/rte_fib_mingw with a custom command 00:02:02.877 [257/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:02.877 [258/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:02.877 [259/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:02.877 [260/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:02.877 [261/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:02.877 [262/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:02.877 [263/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:02.877 [264/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:02.877 [265/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:02.877 [266/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:02.877 [267/740] Generating lib/rte_port_def with a custom command 00:02:02.877 [268/740] Generating lib/rte_port_mingw with a custom command 00:02:02.877 [269/740] Linking static target lib/librte_compressdev.a 00:02:02.877 [270/740] Generating lib/rte_pdump_def with a custom command 00:02:02.877 [271/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:02.877 [272/740] Generating lib/rte_pdump_mingw with a custom command 00:02:02.877 [273/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:02.877 [274/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.138 [275/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:03.138 [276/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:03.138 [277/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:03.138 [278/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:03.138 [279/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:03.138 [280/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.138 [281/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.138 [282/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:03.139 [283/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:03.139 [284/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:03.139 [285/740] Linking static target lib/librte_mempool.a 00:02:03.139 [286/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:03.139 [287/740] Linking static target lib/librte_rcu.a 00:02:03.139 [288/740] Linking static target lib/librte_rawdev.a 00:02:03.139 [289/740] Generating lib/rte_table_def with a custom command 00:02:03.139 [290/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:03.139 [291/740] Generating lib/rte_table_mingw with a custom command 00:02:03.139 [292/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:03.139 [293/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:03.139 [294/740] Linking static target lib/librte_gro.a 00:02:03.139 [295/740] Linking static target lib/librte_bbdev.a 00:02:03.139 [296/740] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:03.139 [297/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:03.139 [298/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.139 [299/740] Linking static target lib/librte_gpudev.a 00:02:03.139 [300/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:03.139 [301/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:03.139 [302/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:03.139 [303/740] Linking static target lib/librte_dmadev.a 00:02:03.139 [304/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.139 [305/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:03.139 [306/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:03.139 [307/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:03.139 [308/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.139 [309/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.139 [310/740] Generating lib/rte_pipeline_def with a custom command 00:02:03.139 [311/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.139 [312/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:03.139 [313/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:03.139 [314/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:03.139 [315/740] Linking static target lib/librte_gso.a 00:02:03.402 [316/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:03.402 [317/740] Linking static target lib/librte_latencystats.a 00:02:03.402 [318/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:03.402 [319/740] Generating lib/rte_graph_mingw with a custom command 00:02:03.402 [320/740] Generating lib/rte_graph_def with a custom command 00:02:03.402 [321/740] Linking target lib/librte_telemetry.so.23.0 00:02:03.402 [322/740] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:03.402 [323/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:03.402 [324/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:03.402 [325/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:03.402 [326/740] Linking static target lib/librte_distributor.a 00:02:03.402 [327/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:03.402 [328/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:03.402 [329/740] Linking static target lib/librte_ip_frag.a 00:02:03.402 [330/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:03.402 [331/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:03.402 [332/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:03.402 [333/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:03.402 [334/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:03.402 [335/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:03.402 [336/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:03.402 [337/740] 
Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:03.402 [338/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:03.402 [339/740] Linking static target lib/librte_regexdev.a 00:02:03.402 [340/740] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:03.402 [341/740] Generating lib/rte_node_def with a custom command 00:02:03.402 [342/740] Generating lib/rte_node_mingw with a custom command 00:02:03.402 [343/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:03.667 [344/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.667 [345/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols 00:02:03.667 [346/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:03.667 [347/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.667 [348/740] Generating drivers/rte_bus_pci_def with a custom command 00:02:03.667 [349/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:03.667 [350/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:03.667 [351/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:03.667 [352/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:03.667 [353/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:03.667 [354/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:03.667 [355/740] Linking static target lib/librte_reorder.a 00:02:03.667 [356/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:03.667 [357/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:03.667 [358/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:03.667 [359/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:03.667 [360/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:03.667 [361/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.667 [362/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:03.667 [363/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:03.667 [364/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:03.667 [365/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.667 [366/740] Linking static target lib/librte_power.a 00:02:03.667 [367/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:03.667 [368/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:03.667 [369/740] Linking static target lib/librte_eal.a 00:02:03.667 [370/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:03.667 [371/740] Linking static target lib/librte_pcapng.a 00:02:03.667 [372/740] Linking static target lib/librte_security.a 00:02:03.667 [373/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:03.667 [374/740] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:03.667 [375/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:03.667 [376/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:03.667 [377/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.667 
[378/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:03.667 [379/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:03.667 [380/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:03.667 [381/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:03.667 [382/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:03.667 [383/740] Linking static target lib/librte_mbuf.a 00:02:03.667 [384/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:02:03.667 [385/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:03.667 [386/740] Linking static target lib/librte_bpf.a 00:02:03.667 [387/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:03.931 [388/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.932 [389/740] Generating drivers/rte_net_i40e_mingw with a custom command 00:02:03.932 [390/740] Generating drivers/rte_net_i40e_def with a custom command 00:02:03.932 [391/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:02:03.932 [392/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:02:03.932 [393/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.932 [394/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:03.932 [395/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:03.932 [396/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:02:03.932 [397/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:02:03.932 [398/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:03.932 [399/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:03.932 [400/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:02:03.932 [401/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:03.932 [402/740] Compiling C object lib/librte_node.a.p/node_log.c.o 00:02:03.932 [403/740] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:03.932 [404/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:03.932 [405/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:03.932 [406/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:02:03.932 [407/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:02:03.932 [408/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:03.932 [409/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:03.932 [410/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:02:03.932 [411/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:02:03.932 [412/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:02:03.932 [413/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:02:03.932 [414/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:03.932 [415/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:03.932 [416/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:02:03.932 [417/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:03.932 [418/740] Linking static target 
lib/librte_rib.a 00:02:03.932 [419/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:02:03.932 [420/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:02:03.932 [421/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:02:03.932 [422/740] Linking static target lib/librte_lpm.a 00:02:04.201 [423/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.201 [424/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:02:04.201 [425/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:02:04.201 [426/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:02:04.201 [427/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.201 [428/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:04.201 [429/740] Linking static target lib/librte_graph.a 00:02:04.201 [430/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:02:04.201 [431/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.201 [432/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:02:04.201 [433/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:02:04.201 [434/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:02:04.201 [435/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.201 [436/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:02:04.201 [437/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:02:04.201 [438/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:02:04.201 [439/740] Linking static target lib/librte_efd.a 00:02:04.201 [440/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:02:04.201 [441/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:02:04.201 [442/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:02:04.201 [443/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:02:04.201 [444/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:04.201 [445/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:04.201 [446/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:02:04.201 [447/740] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:04.201 [448/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.201 [449/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.463 [450/740] Linking static target drivers/librte_bus_vdev.a 00:02:04.463 [451/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:04.463 [452/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:02:04.463 [453/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:02:04.463 [454/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:02:04.463 [455/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.463 [456/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.463 [457/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 
00:02:04.463 [458/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.463 [459/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:02:04.463 [460/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o 00:02:04.463 [461/740] Linking static target lib/librte_fib.a 00:02:04.463 [462/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:02:04.463 [463/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.727 [464/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:02:04.727 [465/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:02:04.727 [466/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.727 [467/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:02:04.727 [468/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.727 [469/740] Linking static target lib/librte_pdump.a 00:02:04.727 [470/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:04.727 [471/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:02:04.727 [472/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.727 [473/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:02:04.727 [474/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.727 [475/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:02:04.727 [476/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:02:04.727 [477/740] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:04.727 [478/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.727 [479/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:04.727 [480/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.727 [481/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:02:04.727 [482/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:02:04.727 [483/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:02:04.727 [484/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.727 [485/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:04.727 [486/740] Linking static target drivers/librte_bus_pci.a 00:02:04.727 [487/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:02:04.727 [488/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:02:04.987 [489/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:02:04.987 [490/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:02:04.987 [491/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:02:04.987 [492/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:02:04.987 [493/740] Linking static target lib/librte_table.a 00:02:04.987 [494/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:02:04.987 [495/740] 
Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:02:04.987 [496/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:02:04.987 [497/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:02:04.987 [498/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:02:04.987 [499/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:02:04.987 [500/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:02:04.987 [501/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.987 [502/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:02:04.987 [503/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:02:04.987 [504/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:02:04.987 [505/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:02:04.987 [506/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:02:04.987 [507/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:02:04.987 [508/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:02:04.987 [509/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:02:04.987 [510/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:02:04.987 [511/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:02:05.246 [512/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:02:05.246 [513/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:02:05.246 [514/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:02:05.246 [515/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.246 [516/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:02:05.246 [517/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:02:05.246 [518/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:02:05.246 [519/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:02:05.246 [520/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:05.246 [521/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:02:05.246 [522/740] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:05.246 [523/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:02:05.246 [524/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:02:05.246 [525/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:02:05.246 [526/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:02:05.246 [527/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:02:05.246 [528/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:05.246 [529/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.246 [530/740] Linking static target lib/librte_cryptodev.a 00:02:05.246 [531/740] Compiling C object 
drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:02:05.246 [532/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:02:05.246 [533/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:02:05.246 [534/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:02:05.246 [535/740] Linking static target lib/librte_sched.a 00:02:05.246 [536/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:02:05.246 [537/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:02:05.246 [538/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:02:05.246 [539/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:02:05.246 [540/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:02:05.246 [541/740] Linking static target lib/librte_node.a 00:02:05.246 [542/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:02:05.505 [543/740] Linking static target lib/librte_ipsec.a 00:02:05.506 [544/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:02:05.506 [545/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:05.506 [546/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.506 [547/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.506 [548/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:05.506 [549/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:02:05.506 [550/740] Linking static target drivers/librte_mempool_ring.a 00:02:05.506 [551/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:02:05.506 [552/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:02:05.506 [553/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:02:05.506 [554/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:02:05.506 [555/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:02:05.506 [556/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:02:05.506 [557/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:02:05.506 [558/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:02:05.506 [559/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:05.506 [560/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:02:05.506 [561/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:02:05.506 [562/740] Linking static target lib/librte_ethdev.a 00:02:05.506 [563/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:02:05.506 [564/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:02:05.506 [565/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:02:05.506 [566/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:02:05.506 [567/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:02:05.506 [568/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:02:05.506 [569/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:02:05.506 [570/740] Compiling C object 
app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:02:05.506 [571/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:02:05.506 [572/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:02:05.765 [573/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:02:05.765 [574/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:02:05.765 [575/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:02:05.765 [576/740] Linking static target lib/librte_member.a 00:02:05.765 [577/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.765 [578/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:02:05.765 [579/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:02:05.765 [580/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:02:05.765 [581/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:02:05.765 [582/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:02:05.765 [583/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:02:05.765 [584/740] Linking static target lib/librte_port.a 00:02:05.765 [585/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:02:05.765 [586/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:02:05.765 [587/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.765 [588/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:02:05.765 [589/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:02:05.765 [590/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:02:06.024 [591/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:02:06.024 [592/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:02:06.024 [593/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.024 [594/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:02:06.024 [595/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:02:06.024 [596/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:02:06.024 [597/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:02:06.024 [598/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:06.024 [599/740] Linking static target lib/librte_hash.a 00:02:06.024 [600/740] Linking static target lib/librte_eventdev.a 00:02:06.024 [601/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o 00:02:06.025 [602/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o 00:02:06.025 [603/740] Linking static target lib/librte_acl.a 00:02:06.025 [604/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:02:06.025 [605/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:02:06.025 [606/740] Linking static target drivers/net/i40e/base/libi40e_base.a 00:02:06.025 [607/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:02:06.025 [608/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:02:06.025 [609/740] Generating lib/member.sym_chk with a custom 
command (wrapped by meson to capture output) 00:02:06.284 [610/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o 00:02:06.284 [611/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:02:06.284 [612/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:02:06.543 [613/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.543 [614/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:02:06.543 [615/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.804 [616/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:02:06.804 [617/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:02:07.062 [618/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.321 [619/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:02:07.321 [620/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:02:07.581 [621/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:02:08.149 [622/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:02:08.149 [623/740] Linking static target drivers/libtmp_rte_net_i40e.a 00:02:08.408 [624/740] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:02:08.408 [625/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:08.408 [626/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:02:08.408 [627/740] Linking static target drivers/librte_net_i40e.a 00:02:08.976 [628/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:08.976 [629/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.236 [630/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:02:09.236 [631/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:02:09.495 [632/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:09.495 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:02:14.773 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:15.032 [635/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:15.032 [636/740] Linking static target lib/librte_vhost.a 00:02:15.602 [637/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:15.602 [638/740] Linking static target lib/librte_pipeline.a 00:02:16.172 [639/740] Linking target app/dpdk-pdump 00:02:16.172 [640/740] Linking target app/dpdk-dumpcap 00:02:16.172 [641/740] Linking target app/dpdk-proc-info 00:02:16.172 [642/740] Linking target app/dpdk-test-bbdev 00:02:16.172 [643/740] Linking target app/dpdk-test-sad 00:02:16.172 [644/740] Linking target app/dpdk-test-cmdline 00:02:16.172 [645/740] Linking target app/dpdk-test-regex 00:02:16.172 [646/740] Linking target app/dpdk-test-gpudev 00:02:16.172 [647/740] Linking target app/dpdk-test-compress-perf 00:02:16.172 [648/740] Linking target app/dpdk-test-acl 00:02:16.172 [649/740] Linking target app/dpdk-test-pipeline 00:02:16.172 [650/740] Linking target app/dpdk-test-eventdev 00:02:16.172 [651/740] Linking target app/dpdk-test-fib 00:02:16.172 
[652/740] Linking target app/dpdk-test-flow-perf 00:02:16.172 [653/740] Linking target app/dpdk-test-crypto-perf 00:02:16.172 [654/740] Linking target app/dpdk-test-security-perf 00:02:16.172 [655/740] Linking target app/dpdk-testpmd 00:02:17.112 [656/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.052 [657/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:18.052 [658/740] Linking target lib/librte_eal.so.23.0 00:02:18.052 [659/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols 00:02:18.370 [660/740] Linking target lib/librte_meter.so.23.0 00:02:18.370 [661/740] Linking target lib/librte_dmadev.so.23.0 00:02:18.370 [662/740] Linking target lib/librte_ring.so.23.0 00:02:18.370 [663/740] Linking target lib/librte_timer.so.23.0 00:02:18.370 [664/740] Linking target lib/librte_pci.so.23.0 00:02:18.370 [665/740] Linking target lib/librte_cfgfile.so.23.0 00:02:18.370 [666/740] Linking target lib/librte_jobstats.so.23.0 00:02:18.370 [667/740] Linking target lib/librte_rawdev.so.23.0 00:02:18.370 [668/740] Linking target lib/librte_stack.so.23.0 00:02:18.370 [669/740] Linking target drivers/librte_bus_vdev.so.23.0 00:02:18.370 [670/740] Linking target lib/librte_graph.so.23.0 00:02:18.370 [671/740] Linking target lib/librte_acl.so.23.0 00:02:18.370 [672/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols 00:02:18.370 [673/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols 00:02:18.370 [674/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols 00:02:18.370 [675/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols 00:02:18.370 [676/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols 00:02:18.370 [677/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols 00:02:18.370 [678/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols 00:02:18.370 [679/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols 00:02:18.370 [680/740] Linking target lib/librte_mempool.so.23.0 00:02:18.370 [681/740] Linking target lib/librte_rcu.so.23.0 00:02:18.370 [682/740] Linking target drivers/librte_bus_pci.so.23.0 00:02:18.629 [683/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols 00:02:18.629 [684/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols 00:02:18.629 [685/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols 00:02:18.629 [686/740] Linking target lib/librte_rib.so.23.0 00:02:18.629 [687/740] Linking target drivers/librte_mempool_ring.so.23.0 00:02:18.629 [688/740] Linking target lib/librte_mbuf.so.23.0 00:02:18.629 [689/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols 00:02:18.629 [690/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols 00:02:18.888 [691/740] Linking target lib/librte_fib.so.23.0 00:02:18.888 [692/740] Linking target lib/librte_bbdev.so.23.0 00:02:18.888 [693/740] Linking target lib/librte_regexdev.so.23.0 00:02:18.888 [694/740] Linking target lib/librte_compressdev.so.23.0 00:02:18.888 [695/740] Linking target lib/librte_net.so.23.0 00:02:18.888 [696/740] Linking target lib/librte_distributor.so.23.0 00:02:18.888 [697/740] Linking target lib/librte_gpudev.so.23.0 
00:02:18.888 [698/740] Linking target lib/librte_cryptodev.so.23.0 00:02:18.888 [699/740] Linking target lib/librte_reorder.so.23.0 00:02:18.888 [700/740] Linking target lib/librte_sched.so.23.0 00:02:18.888 [701/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols 00:02:18.888 [702/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols 00:02:18.888 [703/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols 00:02:18.888 [704/740] Linking target lib/librte_cmdline.so.23.0 00:02:18.888 [705/740] Linking target lib/librte_security.so.23.0 00:02:18.888 [706/740] Linking target lib/librte_hash.so.23.0 00:02:19.147 [707/740] Linking target lib/librte_ethdev.so.23.0 00:02:19.147 [708/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols 00:02:19.147 [709/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols 00:02:19.147 [710/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols 00:02:19.147 [711/740] Linking target lib/librte_lpm.so.23.0 00:02:19.147 [712/740] Linking target lib/librte_efd.so.23.0 00:02:19.147 [713/740] Linking target lib/librte_member.so.23.0 00:02:19.147 [714/740] Linking target lib/librte_ipsec.so.23.0 00:02:19.147 [715/740] Linking target lib/librte_metrics.so.23.0 00:02:19.147 [716/740] Linking target lib/librte_pcapng.so.23.0 00:02:19.147 [717/740] Linking target lib/librte_gso.so.23.0 00:02:19.147 [718/740] Linking target lib/librte_ip_frag.so.23.0 00:02:19.147 [719/740] Linking target lib/librte_gro.so.23.0 00:02:19.147 [720/740] Linking target lib/librte_bpf.so.23.0 00:02:19.147 [721/740] Linking target lib/librte_power.so.23.0 00:02:19.147 [722/740] Linking target lib/librte_eventdev.so.23.0 00:02:19.147 [723/740] Linking target lib/librte_vhost.so.23.0 00:02:19.405 [724/740] Linking target drivers/librte_net_i40e.so.23.0 00:02:19.405 [725/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols 00:02:19.405 [726/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols 00:02:19.405 [727/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols 00:02:19.405 [728/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols 00:02:19.405 [729/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols 00:02:19.405 [730/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols 00:02:19.405 [731/740] Linking target lib/librte_node.so.23.0 00:02:19.405 [732/740] Linking target lib/librte_latencystats.so.23.0 00:02:19.405 [733/740] Linking target lib/librte_bitratestats.so.23.0 00:02:19.405 [734/740] Linking target lib/librte_pdump.so.23.0 00:02:19.405 [735/740] Linking target lib/librte_port.so.23.0 00:02:19.664 [736/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols 00:02:19.664 [737/740] Linking target lib/librte_table.so.23.0 00:02:19.664 [738/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols 00:02:21.046 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:21.046 [740/740] Linking target lib/librte_pipeline.so.23.0 00:02:21.046 15:52:51 -- common/autobuild_common.sh@190 -- $ ninja -C /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp -j112 install 00:02:21.046 ninja: Entering directory 
`/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp' 00:02:21.046 [0/1] Installing files. 00:02:21.310 Installing subdir /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:21.310 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.310 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/fips_validation/Makefile to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.311 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.311 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/main.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/dummy.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:21.312 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:21.312 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/qos_sched/app_thread.c 
to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.313 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:21.314 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/swq.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.314 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.315 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:21.315 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:21.315 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:21.576 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:21.577 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:21.577 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_telemetry.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing 
lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_rawdev.a to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.577 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 
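[Editor's note, not part of the job output] Up to this point the install step has copied the DPDK example sources plus the static (.a) and shared (.so.23.0) libraries into the build prefix /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build. As a rough illustration of how that installed tree could be consumed, the sketch below resolves the libdpdk pkg-config module against the prefix; it assumes the generated libdpdk.pc lands under lib/pkgconfig inside the prefix (the usual meson layout), which is not shown explicitly in this log.

```sh
# Illustrative only -- these commands are not run by this pipeline.
# Assumes libdpdk.pc was installed to <prefix>/lib/pkgconfig by the meson install step.
DPDK_PREFIX=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
export PKG_CONFIG_PATH="$DPDK_PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH"

pkg-config --modversion libdpdk      # expected to report the 22.11.x release built here
pkg-config --cflags --libs libdpdk   # compile/link flags pointing at build/include and build/lib
```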
Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:21.578 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:21.578 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:21.578 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:21.578 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:21.578 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.578 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.578 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.578 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.578 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.578 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.578 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.578 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.578 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.578 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.578 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.578 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.578 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.578 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.578 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.578 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.578 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 
00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include/generic 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_errno.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.578 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.579 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.579 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.842 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 
Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.843 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 
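[Editor's note, not part of the job output] The install has now moved on to the public API headers (rte_*.h), which are being copied per library into /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include. A minimal sanity check one could run against that header tree, again purely as a hedged sketch and not something this job performs, is to compile a throwaway translation unit that includes rte_eal.h using the flags reported by pkg-config:

```sh
# Illustrative only -- not executed by this pipeline.
# Relies on the same PKG_CONFIG_PATH assumption as the earlier sketch.
DPDK_PREFIX=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build
export PKG_CONFIG_PATH="$DPDK_PREFIX/lib/pkgconfig:$PKG_CONFIG_PATH"

# Compile a one-line TU against the freshly installed headers; -o /dev/null discards the object.
echo '#include <rte_eal.h>' | \
    cc -x c - -c -o /dev/null $(pkg-config --cflags libdpdk)
```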
00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/fib/rte_fib6.h 
to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.844 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h 
to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/bin 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:21.845 Installing /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig 00:02:21.845 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so.23 00:02:21.845 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:21.845 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so.23 00:02:21.845 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:21.845 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so.23 00:02:21.845 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:21.845 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:21.845 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:21.845 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:21.845 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:21.845 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:21.845 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mempool.so 
00:02:21.845 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:21.845 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:21.845 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:21.845 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_net.so 00:02:21.845 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:21.845 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:21.845 Installing symlink pointing to librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:21.845 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:21.845 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:21.845 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:21.845 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:21.845 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:21.845 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:21.845 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:21.845 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:21.845 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:21.845 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:21.845 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:21.846 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:21.846 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:21.846 Installing symlink pointing to librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:21.846 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:21.846 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:21.846 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:21.846 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:21.846 Installing 
symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:21.846 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:21.846 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:21.846 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:21.846 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:21.846 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:21.846 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:21.846 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:21.846 Installing symlink pointing to librte_distributor.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:21.846 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:21.846 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:21.846 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:21.846 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:21.846 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:21.846 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:21.846 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:21.846 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:21.846 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:21.846 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:21.846 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:21.846 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:21.846 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:21.846 Installing symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:21.846 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:21.846 Installing symlink pointing to librte_latencystats.so.23 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:21.846 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:21.846 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:21.846 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:21.846 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_member.so 00:02:21.846 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:21.846 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:21.846 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:21.846 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_power.so 00:02:21.846 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:21.846 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:21.846 Installing symlink pointing to librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:21.846 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:21.846 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:21.846 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:21.846 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:21.846 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:21.846 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:21.846 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:21.846 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:21.846 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:21.846 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:21.846 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_security.so 00:02:21.846 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:21.846 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:21.846 Installing symlink pointing to librte_vhost.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:21.846 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:21.846 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:21.846 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:21.846 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:21.846 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:21.846 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:21.846 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_port.so 00:02:21.846 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:21.846 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:21.846 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:21.846 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:21.846 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:21.846 './librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:21.846 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:21.846 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:21.846 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:21.846 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:21.846 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:21.846 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:21.846 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:21.846 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:21.847 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:21.847 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_table.so 00:02:21.847 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:21.847 Installing symlink pointing to librte_pipeline.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:21.847 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:21.847 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:21.847 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:21.847 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/librte_node.so 00:02:21.847 Installing symlink pointing to librte_bus_pci.so.23.0 to 
/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:21.847 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:21.847 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:21.847 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:21.847 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:21.847 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:21.847 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:21.847 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:21.847 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:21.847 15:52:52 -- common/autobuild_common.sh@192 -- $ uname -s 00:02:21.847 15:52:52 -- common/autobuild_common.sh@192 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:21.847 15:52:52 -- common/autobuild_common.sh@203 -- $ cat 00:02:21.847 15:52:52 -- common/autobuild_common.sh@208 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:02:21.847 00:02:21.847 real 0m26.223s 00:02:21.847 user 6m36.848s 00:02:21.847 sys 2m14.954s 00:02:21.847 15:52:52 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:02:21.847 15:52:52 -- common/autotest_common.sh@10 -- $ set +x 00:02:21.847 ************************************ 00:02:21.847 END TEST build_native_dpdk 00:02:21.847 ************************************ 00:02:21.847 15:52:52 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:21.847 15:52:52 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:21.847 15:52:52 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:21.847 15:52:52 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:21.847 15:52:52 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:21.847 15:52:52 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:21.847 15:52:52 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:21.847 15:52:52 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build --with-shared 00:02:22.107 Using /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:22.107 DPDK libraries: /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:02:22.107 DPDK includes: //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:02:22.107 Using default SPDK env in /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:02:22.676 Using 'verbs' RDMA provider 00:02:38.141 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l/spdk-isal.log)...done. 
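The SPDK configure step above consumes the DPDK tree that was just installed: --with-dpdk points at dpdk/build, and the pkg-config files placed in dpdk/build/lib/pkgconfig (libdpdk.pc and libdpdk-libs.pc, installed earlier in this log) supply the additional compile and link flags the job reports. A minimal sketch of querying that metadata by hand, reusing the workspace paths from this log and assuming pkg-config is available on the build node (an illustration, not taken from the recorded job output):

  export PKG_CONFIG_PATH=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/pkgconfig
  # Compile flags advertised by the installed libdpdk.pc (the include directory populated above)
  pkg-config --cflags libdpdk
  # Link flags; with this shared DPDK build they resolve to the librte_*.so symlinks installed above
  pkg-config --libs libdpdk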
00:02:50.359 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-phy-autotest/spdk/isa-l-crypto/spdk-isal-crypto.log)...done. 00:02:50.359 Creating mk/config.mk...done. 00:02:50.359 Creating mk/cc.flags.mk...done. 00:02:50.359 Type 'make' to build. 00:02:50.359 15:53:20 -- spdk/autobuild.sh@69 -- $ run_test make make -j112 00:02:50.359 15:53:20 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:02:50.359 15:53:20 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:02:50.359 15:53:20 -- common/autotest_common.sh@10 -- $ set +x 00:02:50.359 ************************************ 00:02:50.359 START TEST make 00:02:50.359 ************************************ 00:02:50.359 15:53:20 -- common/autotest_common.sh@1114 -- $ make -j112 00:02:50.359 make[1]: Nothing to be done for 'all'. 00:03:00.349 CC lib/ut_mock/mock.o 00:03:00.349 CC lib/ut/ut.o 00:03:00.349 CC lib/log/log.o 00:03:00.349 CC lib/log/log_flags.o 00:03:00.349 CC lib/log/log_deprecated.o 00:03:00.349 LIB libspdk_ut_mock.a 00:03:00.349 LIB libspdk_log.a 00:03:00.349 LIB libspdk_ut.a 00:03:00.349 SO libspdk_ut_mock.so.5.0 00:03:00.349 SO libspdk_ut.so.1.0 00:03:00.349 SO libspdk_log.so.6.1 00:03:00.349 SYMLINK libspdk_ut_mock.so 00:03:00.349 SYMLINK libspdk_log.so 00:03:00.349 SYMLINK libspdk_ut.so 00:03:00.349 CXX lib/trace_parser/trace.o 00:03:00.349 CC lib/ioat/ioat.o 00:03:00.349 CC lib/util/base64.o 00:03:00.349 CC lib/dma/dma.o 00:03:00.349 CC lib/util/bit_array.o 00:03:00.349 CC lib/util/cpuset.o 00:03:00.349 CC lib/util/crc16.o 00:03:00.349 CC lib/util/crc32.o 00:03:00.349 CC lib/util/crc32c.o 00:03:00.349 CC lib/util/crc32_ieee.o 00:03:00.349 CC lib/util/crc64.o 00:03:00.349 CC lib/util/dif.o 00:03:00.349 CC lib/util/fd.o 00:03:00.349 CC lib/util/file.o 00:03:00.349 CC lib/util/hexlify.o 00:03:00.349 CC lib/util/iov.o 00:03:00.349 CC lib/util/math.o 00:03:00.349 CC lib/util/pipe.o 00:03:00.349 CC lib/util/strerror_tls.o 00:03:00.349 CC lib/util/string.o 00:03:00.349 CC lib/util/uuid.o 00:03:00.349 CC lib/util/fd_group.o 00:03:00.349 CC lib/util/xor.o 00:03:00.349 CC lib/util/zipf.o 00:03:00.349 CC lib/vfio_user/host/vfio_user.o 00:03:00.349 CC lib/vfio_user/host/vfio_user_pci.o 00:03:00.349 LIB libspdk_dma.a 00:03:00.349 SO libspdk_dma.so.3.0 00:03:00.349 LIB libspdk_ioat.a 00:03:00.349 SYMLINK libspdk_dma.so 00:03:00.349 SO libspdk_ioat.so.6.0 00:03:00.349 LIB libspdk_vfio_user.a 00:03:00.349 SYMLINK libspdk_ioat.so 00:03:00.349 SO libspdk_vfio_user.so.4.0 00:03:00.607 LIB libspdk_util.a 00:03:00.607 SYMLINK libspdk_vfio_user.so 00:03:00.607 SO libspdk_util.so.8.0 00:03:00.607 SYMLINK libspdk_util.so 00:03:00.866 LIB libspdk_trace_parser.a 00:03:00.866 SO libspdk_trace_parser.so.4.0 00:03:00.866 SYMLINK libspdk_trace_parser.so 00:03:00.866 CC lib/vmd/vmd.o 00:03:00.866 CC lib/vmd/led.o 00:03:00.866 CC lib/conf/conf.o 00:03:00.866 CC lib/idxd/idxd.o 00:03:00.866 CC lib/json/json_parse.o 00:03:00.866 CC lib/env_dpdk/env.o 00:03:00.866 CC lib/idxd/idxd_user.o 00:03:00.866 CC lib/json/json_util.o 00:03:00.866 CC lib/env_dpdk/memory.o 00:03:00.866 CC lib/idxd/idxd_kernel.o 00:03:00.866 CC lib/json/json_write.o 00:03:00.866 CC lib/env_dpdk/pci.o 00:03:00.866 CC lib/env_dpdk/init.o 00:03:00.866 CC lib/env_dpdk/threads.o 00:03:00.866 CC lib/env_dpdk/pci_ioat.o 00:03:00.866 CC lib/rdma/common.o 00:03:00.866 CC lib/rdma/rdma_verbs.o 00:03:00.866 CC lib/env_dpdk/pci_virtio.o 00:03:00.866 CC lib/env_dpdk/pci_vmd.o 00:03:00.866 CC lib/env_dpdk/pci_idxd.o 00:03:00.866 CC lib/env_dpdk/pci_event.o 00:03:00.866 CC 
lib/env_dpdk/sigbus_handler.o 00:03:00.866 CC lib/env_dpdk/pci_dpdk.o 00:03:00.866 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:00.866 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:01.125 LIB libspdk_conf.a 00:03:01.125 SO libspdk_conf.so.5.0 00:03:01.125 LIB libspdk_json.a 00:03:01.125 LIB libspdk_rdma.a 00:03:01.125 SYMLINK libspdk_conf.so 00:03:01.125 SO libspdk_json.so.5.1 00:03:01.384 SO libspdk_rdma.so.5.0 00:03:01.384 SYMLINK libspdk_json.so 00:03:01.384 SYMLINK libspdk_rdma.so 00:03:01.384 LIB libspdk_idxd.a 00:03:01.384 SO libspdk_idxd.so.11.0 00:03:01.384 LIB libspdk_vmd.a 00:03:01.384 SO libspdk_vmd.so.5.0 00:03:01.384 SYMLINK libspdk_idxd.so 00:03:01.644 SYMLINK libspdk_vmd.so 00:03:01.644 CC lib/jsonrpc/jsonrpc_server.o 00:03:01.644 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:01.644 CC lib/jsonrpc/jsonrpc_client.o 00:03:01.644 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:01.644 LIB libspdk_jsonrpc.a 00:03:01.904 SO libspdk_jsonrpc.so.5.1 00:03:01.904 SYMLINK libspdk_jsonrpc.so 00:03:01.904 LIB libspdk_env_dpdk.a 00:03:01.904 SO libspdk_env_dpdk.so.13.0 00:03:02.163 SYMLINK libspdk_env_dpdk.so 00:03:02.163 CC lib/rpc/rpc.o 00:03:02.163 LIB libspdk_rpc.a 00:03:02.423 SO libspdk_rpc.so.5.0 00:03:02.423 SYMLINK libspdk_rpc.so 00:03:02.682 CC lib/trace/trace.o 00:03:02.682 CC lib/trace/trace_flags.o 00:03:02.682 CC lib/trace/trace_rpc.o 00:03:02.682 CC lib/notify/notify.o 00:03:02.682 CC lib/notify/notify_rpc.o 00:03:02.682 CC lib/sock/sock.o 00:03:02.682 CC lib/sock/sock_rpc.o 00:03:02.682 LIB libspdk_notify.a 00:03:02.942 LIB libspdk_trace.a 00:03:02.942 SO libspdk_notify.so.5.0 00:03:02.942 SO libspdk_trace.so.9.0 00:03:02.942 SYMLINK libspdk_notify.so 00:03:02.942 LIB libspdk_sock.a 00:03:02.942 SYMLINK libspdk_trace.so 00:03:02.942 SO libspdk_sock.so.8.0 00:03:02.942 SYMLINK libspdk_sock.so 00:03:03.202 CC lib/thread/thread.o 00:03:03.202 CC lib/thread/iobuf.o 00:03:03.202 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:03.202 CC lib/nvme/nvme_ctrlr.o 00:03:03.202 CC lib/nvme/nvme_fabric.o 00:03:03.202 CC lib/nvme/nvme_ns_cmd.o 00:03:03.202 CC lib/nvme/nvme_ns.o 00:03:03.202 CC lib/nvme/nvme_pcie_common.o 00:03:03.202 CC lib/nvme/nvme_pcie.o 00:03:03.202 CC lib/nvme/nvme_qpair.o 00:03:03.202 CC lib/nvme/nvme.o 00:03:03.202 CC lib/nvme/nvme_quirks.o 00:03:03.202 CC lib/nvme/nvme_transport.o 00:03:03.202 CC lib/nvme/nvme_discovery.o 00:03:03.202 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:03.202 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:03.202 CC lib/nvme/nvme_tcp.o 00:03:03.202 CC lib/nvme/nvme_opal.o 00:03:03.202 CC lib/nvme/nvme_io_msg.o 00:03:03.202 CC lib/nvme/nvme_poll_group.o 00:03:03.202 CC lib/nvme/nvme_zns.o 00:03:03.202 CC lib/nvme/nvme_cuse.o 00:03:03.202 CC lib/nvme/nvme_vfio_user.o 00:03:03.202 CC lib/nvme/nvme_rdma.o 00:03:04.581 LIB libspdk_thread.a 00:03:04.581 SO libspdk_thread.so.9.0 00:03:04.581 SYMLINK libspdk_thread.so 00:03:04.581 CC lib/virtio/virtio.o 00:03:04.581 CC lib/blob/blobstore.o 00:03:04.581 CC lib/init/subsystem.o 00:03:04.581 CC lib/blob/request.o 00:03:04.581 CC lib/virtio/virtio_vhost_user.o 00:03:04.581 CC lib/init/json_config.o 00:03:04.581 CC lib/accel/accel.o 00:03:04.581 CC lib/blob/zeroes.o 00:03:04.581 CC lib/virtio/virtio_vfio_user.o 00:03:04.581 CC lib/accel/accel_rpc.o 00:03:04.581 CC lib/blob/blob_bs_dev.o 00:03:04.581 CC lib/virtio/virtio_pci.o 00:03:04.581 CC lib/init/subsystem_rpc.o 00:03:04.581 CC lib/accel/accel_sw.o 00:03:04.581 CC lib/init/rpc.o 00:03:04.840 LIB libspdk_nvme.a 00:03:04.840 LIB libspdk_init.a 00:03:04.840 SO libspdk_init.so.4.0 00:03:04.840 LIB 
libspdk_virtio.a 00:03:04.840 SO libspdk_nvme.so.12.0 00:03:04.840 SO libspdk_virtio.so.6.0 00:03:04.840 SYMLINK libspdk_init.so 00:03:05.100 SYMLINK libspdk_virtio.so 00:03:05.100 SYMLINK libspdk_nvme.so 00:03:05.100 CC lib/event/app.o 00:03:05.100 CC lib/event/reactor.o 00:03:05.100 CC lib/event/log_rpc.o 00:03:05.100 CC lib/event/app_rpc.o 00:03:05.100 CC lib/event/scheduler_static.o 00:03:05.360 LIB libspdk_accel.a 00:03:05.360 SO libspdk_accel.so.14.0 00:03:05.360 SYMLINK libspdk_accel.so 00:03:05.620 LIB libspdk_event.a 00:03:05.620 SO libspdk_event.so.12.0 00:03:05.620 SYMLINK libspdk_event.so 00:03:05.620 CC lib/bdev/bdev.o 00:03:05.620 CC lib/bdev/bdev_rpc.o 00:03:05.620 CC lib/bdev/bdev_zone.o 00:03:05.620 CC lib/bdev/part.o 00:03:05.620 CC lib/bdev/scsi_nvme.o 00:03:06.559 LIB libspdk_blob.a 00:03:06.559 SO libspdk_blob.so.10.1 00:03:06.559 SYMLINK libspdk_blob.so 00:03:06.819 CC lib/blobfs/blobfs.o 00:03:06.819 CC lib/blobfs/tree.o 00:03:06.819 CC lib/lvol/lvol.o 00:03:07.388 LIB libspdk_blobfs.a 00:03:07.646 SO libspdk_blobfs.so.9.0 00:03:07.646 LIB libspdk_lvol.a 00:03:07.646 LIB libspdk_bdev.a 00:03:07.646 SO libspdk_lvol.so.9.1 00:03:07.646 SYMLINK libspdk_blobfs.so 00:03:07.646 SO libspdk_bdev.so.14.0 00:03:07.646 SYMLINK libspdk_lvol.so 00:03:07.646 SYMLINK libspdk_bdev.so 00:03:07.905 CC lib/nbd/nbd.o 00:03:07.905 CC lib/scsi/dev.o 00:03:07.905 CC lib/nvmf/ctrlr.o 00:03:07.905 CC lib/nbd/nbd_rpc.o 00:03:07.905 CC lib/nvmf/ctrlr_discovery.o 00:03:07.905 CC lib/scsi/lun.o 00:03:07.905 CC lib/nvmf/ctrlr_bdev.o 00:03:07.905 CC lib/ftl/ftl_core.o 00:03:07.905 CC lib/scsi/port.o 00:03:07.905 CC lib/scsi/scsi.o 00:03:07.905 CC lib/nvmf/subsystem.o 00:03:07.905 CC lib/ftl/ftl_layout.o 00:03:07.905 CC lib/ftl/ftl_init.o 00:03:07.905 CC lib/nvmf/nvmf.o 00:03:07.905 CC lib/scsi/scsi_bdev.o 00:03:07.905 CC lib/nvmf/nvmf_rpc.o 00:03:07.905 CC lib/ftl/ftl_debug.o 00:03:07.905 CC lib/scsi/scsi_pr.o 00:03:07.905 CC lib/nvmf/transport.o 00:03:07.905 CC lib/ftl/ftl_io.o 00:03:07.905 CC lib/scsi/scsi_rpc.o 00:03:07.905 CC lib/nvmf/tcp.o 00:03:07.905 CC lib/scsi/task.o 00:03:07.905 CC lib/ublk/ublk.o 00:03:07.905 CC lib/ftl/ftl_sb.o 00:03:07.905 CC lib/ftl/ftl_l2p.o 00:03:07.905 CC lib/ublk/ublk_rpc.o 00:03:07.905 CC lib/nvmf/rdma.o 00:03:07.905 CC lib/ftl/ftl_l2p_flat.o 00:03:07.905 CC lib/ftl/ftl_nv_cache.o 00:03:07.905 CC lib/ftl/ftl_band.o 00:03:07.905 CC lib/ftl/ftl_band_ops.o 00:03:07.905 CC lib/ftl/ftl_writer.o 00:03:07.905 CC lib/ftl/ftl_rq.o 00:03:07.905 CC lib/ftl/ftl_reloc.o 00:03:07.905 CC lib/ftl/ftl_l2p_cache.o 00:03:07.905 CC lib/ftl/ftl_p2l.o 00:03:07.905 CC lib/ftl/mngt/ftl_mngt.o 00:03:07.905 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:07.905 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:07.905 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:07.905 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:07.905 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:07.905 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:07.905 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:07.905 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:07.905 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:07.905 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:07.905 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:07.905 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:07.905 CC lib/ftl/utils/ftl_conf.o 00:03:07.905 CC lib/ftl/utils/ftl_md.o 00:03:07.905 CC lib/ftl/utils/ftl_mempool.o 00:03:07.905 CC lib/ftl/utils/ftl_bitmap.o 00:03:07.905 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:07.906 CC lib/ftl/utils/ftl_property.o 00:03:07.906 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:07.906 CC 
lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:07.906 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:07.906 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:07.906 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:07.906 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:07.906 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:07.906 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:07.906 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:07.906 CC lib/ftl/base/ftl_base_dev.o 00:03:07.906 CC lib/ftl/base/ftl_base_bdev.o 00:03:07.906 CC lib/ftl/ftl_trace.o 00:03:08.529 LIB libspdk_nbd.a 00:03:08.529 SO libspdk_nbd.so.6.0 00:03:08.529 LIB libspdk_scsi.a 00:03:08.529 SYMLINK libspdk_nbd.so 00:03:08.529 SO libspdk_scsi.so.8.0 00:03:08.530 LIB libspdk_ublk.a 00:03:08.530 SYMLINK libspdk_scsi.so 00:03:08.530 SO libspdk_ublk.so.2.0 00:03:08.797 SYMLINK libspdk_ublk.so 00:03:08.797 LIB libspdk_ftl.a 00:03:08.797 CC lib/vhost/vhost.o 00:03:08.797 CC lib/vhost/vhost_rpc.o 00:03:08.797 CC lib/vhost/vhost_scsi.o 00:03:08.797 CC lib/vhost/vhost_blk.o 00:03:08.797 CC lib/vhost/rte_vhost_user.o 00:03:08.797 CC lib/iscsi/conn.o 00:03:08.797 CC lib/iscsi/iscsi.o 00:03:08.797 CC lib/iscsi/init_grp.o 00:03:08.797 CC lib/iscsi/md5.o 00:03:08.797 CC lib/iscsi/portal_grp.o 00:03:08.797 CC lib/iscsi/param.o 00:03:08.797 CC lib/iscsi/tgt_node.o 00:03:08.797 CC lib/iscsi/iscsi_subsystem.o 00:03:08.797 CC lib/iscsi/iscsi_rpc.o 00:03:08.797 CC lib/iscsi/task.o 00:03:08.797 SO libspdk_ftl.so.8.0 00:03:09.056 SYMLINK libspdk_ftl.so 00:03:09.625 LIB libspdk_vhost.a 00:03:09.625 LIB libspdk_nvmf.a 00:03:09.625 SO libspdk_vhost.so.7.1 00:03:09.625 SO libspdk_nvmf.so.17.0 00:03:09.625 SYMLINK libspdk_vhost.so 00:03:09.888 SYMLINK libspdk_nvmf.so 00:03:09.888 LIB libspdk_iscsi.a 00:03:09.888 SO libspdk_iscsi.so.7.0 00:03:09.888 SYMLINK libspdk_iscsi.so 00:03:10.457 CC module/env_dpdk/env_dpdk_rpc.o 00:03:10.457 CC module/blob/bdev/blob_bdev.o 00:03:10.458 CC module/sock/posix/posix.o 00:03:10.458 CC module/accel/iaa/accel_iaa.o 00:03:10.458 CC module/accel/iaa/accel_iaa_rpc.o 00:03:10.458 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:10.458 CC module/accel/error/accel_error_rpc.o 00:03:10.458 CC module/accel/error/accel_error.o 00:03:10.458 CC module/scheduler/gscheduler/gscheduler.o 00:03:10.458 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:10.458 CC module/accel/ioat/accel_ioat_rpc.o 00:03:10.458 CC module/accel/ioat/accel_ioat.o 00:03:10.458 CC module/accel/dsa/accel_dsa.o 00:03:10.458 CC module/accel/dsa/accel_dsa_rpc.o 00:03:10.458 LIB libspdk_env_dpdk_rpc.a 00:03:10.458 SO libspdk_env_dpdk_rpc.so.5.0 00:03:10.717 SYMLINK libspdk_env_dpdk_rpc.so 00:03:10.717 LIB libspdk_scheduler_gscheduler.a 00:03:10.717 LIB libspdk_scheduler_dpdk_governor.a 00:03:10.717 LIB libspdk_accel_ioat.a 00:03:10.717 SO libspdk_scheduler_gscheduler.so.3.0 00:03:10.717 LIB libspdk_accel_iaa.a 00:03:10.717 LIB libspdk_accel_error.a 00:03:10.717 LIB libspdk_scheduler_dynamic.a 00:03:10.717 SO libspdk_scheduler_dpdk_governor.so.3.0 00:03:10.717 SO libspdk_accel_ioat.so.5.0 00:03:10.717 LIB libspdk_blob_bdev.a 00:03:10.717 LIB libspdk_accel_dsa.a 00:03:10.717 SO libspdk_accel_error.so.1.0 00:03:10.717 SO libspdk_scheduler_dynamic.so.3.0 00:03:10.717 SO libspdk_accel_iaa.so.2.0 00:03:10.717 SYMLINK libspdk_scheduler_gscheduler.so 00:03:10.717 SO libspdk_blob_bdev.so.10.1 00:03:10.717 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:10.717 SO libspdk_accel_dsa.so.4.0 00:03:10.717 SYMLINK libspdk_accel_ioat.so 00:03:10.717 SYMLINK libspdk_accel_error.so 00:03:10.717 SYMLINK libspdk_scheduler_dynamic.so 
00:03:10.717 SYMLINK libspdk_accel_iaa.so 00:03:10.717 SYMLINK libspdk_blob_bdev.so 00:03:10.717 SYMLINK libspdk_accel_dsa.so 00:03:10.976 LIB libspdk_sock_posix.a 00:03:10.976 SO libspdk_sock_posix.so.5.0 00:03:11.235 CC module/bdev/delay/vbdev_delay.o 00:03:11.235 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:11.235 CC module/bdev/malloc/bdev_malloc.o 00:03:11.235 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:11.235 CC module/bdev/passthru/vbdev_passthru.o 00:03:11.235 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:11.235 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:11.235 CC module/bdev/nvme/bdev_nvme.o 00:03:11.236 CC module/bdev/split/vbdev_split.o 00:03:11.236 CC module/bdev/nvme/vbdev_opal.o 00:03:11.236 CC module/bdev/null/bdev_null.o 00:03:11.236 CC module/bdev/split/vbdev_split_rpc.o 00:03:11.236 CC module/bdev/nvme/nvme_rpc.o 00:03:11.236 CC module/bdev/null/bdev_null_rpc.o 00:03:11.236 CC module/bdev/nvme/bdev_mdns_client.o 00:03:11.236 CC module/bdev/error/vbdev_error.o 00:03:11.236 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:11.236 CC module/bdev/error/vbdev_error_rpc.o 00:03:11.236 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:11.236 CC module/bdev/raid/bdev_raid.o 00:03:11.236 CC module/bdev/gpt/vbdev_gpt.o 00:03:11.236 CC module/bdev/gpt/gpt.o 00:03:11.236 SYMLINK libspdk_sock_posix.so 00:03:11.236 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:11.236 CC module/bdev/raid/bdev_raid_rpc.o 00:03:11.236 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:11.236 CC module/bdev/ftl/bdev_ftl.o 00:03:11.236 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:11.236 CC module/bdev/iscsi/bdev_iscsi.o 00:03:11.236 CC module/bdev/lvol/vbdev_lvol.o 00:03:11.236 CC module/bdev/raid/bdev_raid_sb.o 00:03:11.236 CC module/bdev/raid/raid1.o 00:03:11.236 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:11.236 CC module/bdev/raid/concat.o 00:03:11.236 CC module/bdev/raid/raid0.o 00:03:11.236 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:11.236 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:11.236 CC module/blobfs/bdev/blobfs_bdev.o 00:03:11.236 CC module/bdev/aio/bdev_aio_rpc.o 00:03:11.236 CC module/bdev/aio/bdev_aio.o 00:03:11.236 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:11.236 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:11.236 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:11.495 LIB libspdk_blobfs_bdev.a 00:03:11.495 LIB libspdk_bdev_split.a 00:03:11.495 SO libspdk_blobfs_bdev.so.5.0 00:03:11.495 LIB libspdk_bdev_gpt.a 00:03:11.495 SO libspdk_bdev_split.so.5.0 00:03:11.495 LIB libspdk_bdev_null.a 00:03:11.495 LIB libspdk_bdev_ftl.a 00:03:11.495 LIB libspdk_bdev_error.a 00:03:11.495 LIB libspdk_bdev_passthru.a 00:03:11.495 SO libspdk_bdev_gpt.so.5.0 00:03:11.495 SO libspdk_bdev_null.so.5.0 00:03:11.495 SYMLINK libspdk_blobfs_bdev.so 00:03:11.495 LIB libspdk_bdev_malloc.a 00:03:11.495 LIB libspdk_bdev_zone_block.a 00:03:11.495 SO libspdk_bdev_ftl.so.5.0 00:03:11.495 SYMLINK libspdk_bdev_split.so 00:03:11.495 LIB libspdk_bdev_delay.a 00:03:11.495 SO libspdk_bdev_error.so.5.0 00:03:11.495 LIB libspdk_bdev_aio.a 00:03:11.495 SO libspdk_bdev_passthru.so.5.0 00:03:11.495 SO libspdk_bdev_malloc.so.5.0 00:03:11.495 LIB libspdk_bdev_iscsi.a 00:03:11.495 SO libspdk_bdev_zone_block.so.5.0 00:03:11.495 SYMLINK libspdk_bdev_gpt.so 00:03:11.495 SYMLINK libspdk_bdev_null.so 00:03:11.495 SO libspdk_bdev_aio.so.5.0 00:03:11.495 SO libspdk_bdev_delay.so.5.0 00:03:11.495 SYMLINK libspdk_bdev_ftl.so 00:03:11.496 SO libspdk_bdev_iscsi.so.5.0 00:03:11.496 SYMLINK libspdk_bdev_error.so 00:03:11.496 SYMLINK 
libspdk_bdev_passthru.so 00:03:11.496 SYMLINK libspdk_bdev_malloc.so 00:03:11.496 SYMLINK libspdk_bdev_zone_block.so 00:03:11.496 SYMLINK libspdk_bdev_delay.so 00:03:11.755 SYMLINK libspdk_bdev_aio.so 00:03:11.755 LIB libspdk_bdev_lvol.a 00:03:11.755 SYMLINK libspdk_bdev_iscsi.so 00:03:11.755 LIB libspdk_bdev_virtio.a 00:03:11.755 SO libspdk_bdev_lvol.so.5.0 00:03:11.755 SO libspdk_bdev_virtio.so.5.0 00:03:11.755 SYMLINK libspdk_bdev_lvol.so 00:03:11.755 SYMLINK libspdk_bdev_virtio.so 00:03:12.015 LIB libspdk_bdev_raid.a 00:03:12.015 SO libspdk_bdev_raid.so.5.0 00:03:12.015 SYMLINK libspdk_bdev_raid.so 00:03:12.953 LIB libspdk_bdev_nvme.a 00:03:12.953 SO libspdk_bdev_nvme.so.6.0 00:03:12.953 SYMLINK libspdk_bdev_nvme.so 00:03:13.521 CC module/event/subsystems/scheduler/scheduler.o 00:03:13.521 CC module/event/subsystems/iobuf/iobuf.o 00:03:13.521 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:13.521 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:13.521 CC module/event/subsystems/vmd/vmd.o 00:03:13.521 CC module/event/subsystems/sock/sock.o 00:03:13.521 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:13.521 LIB libspdk_event_sock.a 00:03:13.521 LIB libspdk_event_vhost_blk.a 00:03:13.521 LIB libspdk_event_scheduler.a 00:03:13.521 LIB libspdk_event_iobuf.a 00:03:13.521 LIB libspdk_event_vmd.a 00:03:13.521 SO libspdk_event_sock.so.4.0 00:03:13.521 SO libspdk_event_vhost_blk.so.2.0 00:03:13.521 SO libspdk_event_iobuf.so.2.0 00:03:13.521 SO libspdk_event_scheduler.so.3.0 00:03:13.521 SO libspdk_event_vmd.so.5.0 00:03:13.521 SYMLINK libspdk_event_scheduler.so 00:03:13.521 SYMLINK libspdk_event_sock.so 00:03:13.521 SYMLINK libspdk_event_vhost_blk.so 00:03:13.521 SYMLINK libspdk_event_iobuf.so 00:03:13.521 SYMLINK libspdk_event_vmd.so 00:03:13.780 CC module/event/subsystems/accel/accel.o 00:03:14.040 LIB libspdk_event_accel.a 00:03:14.040 SO libspdk_event_accel.so.5.0 00:03:14.040 SYMLINK libspdk_event_accel.so 00:03:14.298 CC module/event/subsystems/bdev/bdev.o 00:03:14.558 LIB libspdk_event_bdev.a 00:03:14.558 SO libspdk_event_bdev.so.5.0 00:03:14.558 SYMLINK libspdk_event_bdev.so 00:03:14.817 CC module/event/subsystems/scsi/scsi.o 00:03:14.817 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:14.817 CC module/event/subsystems/ublk/ublk.o 00:03:14.817 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:14.817 CC module/event/subsystems/nbd/nbd.o 00:03:15.076 LIB libspdk_event_ublk.a 00:03:15.076 LIB libspdk_event_nbd.a 00:03:15.076 LIB libspdk_event_scsi.a 00:03:15.076 SO libspdk_event_ublk.so.2.0 00:03:15.076 LIB libspdk_event_nvmf.a 00:03:15.076 SO libspdk_event_nbd.so.5.0 00:03:15.076 SO libspdk_event_scsi.so.5.0 00:03:15.076 SYMLINK libspdk_event_ublk.so 00:03:15.076 SO libspdk_event_nvmf.so.5.0 00:03:15.076 SYMLINK libspdk_event_nbd.so 00:03:15.076 SYMLINK libspdk_event_scsi.so 00:03:15.076 SYMLINK libspdk_event_nvmf.so 00:03:15.336 CC module/event/subsystems/iscsi/iscsi.o 00:03:15.336 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:15.595 LIB libspdk_event_vhost_scsi.a 00:03:15.595 LIB libspdk_event_iscsi.a 00:03:15.595 SO libspdk_event_vhost_scsi.so.2.0 00:03:15.595 SO libspdk_event_iscsi.so.5.0 00:03:15.595 SYMLINK libspdk_event_vhost_scsi.so 00:03:15.595 SYMLINK libspdk_event_iscsi.so 00:03:15.852 SO libspdk.so.5.0 00:03:15.852 SYMLINK libspdk.so 00:03:16.122 CXX app/trace/trace.o 00:03:16.122 CC app/spdk_lspci/spdk_lspci.o 00:03:16.122 CC app/spdk_nvme_perf/perf.o 00:03:16.122 CC app/trace_record/trace_record.o 00:03:16.122 CC app/spdk_nvme_identify/identify.o 
00:03:16.122 CC app/spdk_nvme_discover/discovery_aer.o 00:03:16.122 TEST_HEADER include/spdk/accel_module.h 00:03:16.122 TEST_HEADER include/spdk/accel.h 00:03:16.122 TEST_HEADER include/spdk/assert.h 00:03:16.122 TEST_HEADER include/spdk/barrier.h 00:03:16.122 CC test/rpc_client/rpc_client_test.o 00:03:16.122 TEST_HEADER include/spdk/bdev.h 00:03:16.122 TEST_HEADER include/spdk/bdev_module.h 00:03:16.122 TEST_HEADER include/spdk/base64.h 00:03:16.122 TEST_HEADER include/spdk/bdev_zone.h 00:03:16.122 TEST_HEADER include/spdk/bit_pool.h 00:03:16.122 TEST_HEADER include/spdk/bit_array.h 00:03:16.122 CC app/spdk_top/spdk_top.o 00:03:16.122 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:16.122 TEST_HEADER include/spdk/blob_bdev.h 00:03:16.122 TEST_HEADER include/spdk/blob.h 00:03:16.122 TEST_HEADER include/spdk/blobfs.h 00:03:16.122 TEST_HEADER include/spdk/conf.h 00:03:16.122 TEST_HEADER include/spdk/config.h 00:03:16.122 TEST_HEADER include/spdk/cpuset.h 00:03:16.122 TEST_HEADER include/spdk/crc16.h 00:03:16.122 TEST_HEADER include/spdk/crc32.h 00:03:16.122 TEST_HEADER include/spdk/dif.h 00:03:16.122 TEST_HEADER include/spdk/crc64.h 00:03:16.122 TEST_HEADER include/spdk/endian.h 00:03:16.122 TEST_HEADER include/spdk/dma.h 00:03:16.122 TEST_HEADER include/spdk/env_dpdk.h 00:03:16.122 TEST_HEADER include/spdk/event.h 00:03:16.122 TEST_HEADER include/spdk/env.h 00:03:16.122 TEST_HEADER include/spdk/fd_group.h 00:03:16.122 TEST_HEADER include/spdk/fd.h 00:03:16.122 TEST_HEADER include/spdk/file.h 00:03:16.122 TEST_HEADER include/spdk/ftl.h 00:03:16.122 TEST_HEADER include/spdk/gpt_spec.h 00:03:16.122 TEST_HEADER include/spdk/hexlify.h 00:03:16.122 TEST_HEADER include/spdk/histogram_data.h 00:03:16.122 TEST_HEADER include/spdk/idxd.h 00:03:16.122 TEST_HEADER include/spdk/idxd_spec.h 00:03:16.122 TEST_HEADER include/spdk/ioat.h 00:03:16.122 TEST_HEADER include/spdk/init.h 00:03:16.122 CC app/vhost/vhost.o 00:03:16.122 TEST_HEADER include/spdk/ioat_spec.h 00:03:16.122 TEST_HEADER include/spdk/iscsi_spec.h 00:03:16.122 TEST_HEADER include/spdk/json.h 00:03:16.122 TEST_HEADER include/spdk/likely.h 00:03:16.122 TEST_HEADER include/spdk/jsonrpc.h 00:03:16.122 TEST_HEADER include/spdk/log.h 00:03:16.122 CC app/spdk_dd/spdk_dd.o 00:03:16.122 TEST_HEADER include/spdk/lvol.h 00:03:16.122 TEST_HEADER include/spdk/memory.h 00:03:16.122 TEST_HEADER include/spdk/mmio.h 00:03:16.122 TEST_HEADER include/spdk/nbd.h 00:03:16.122 TEST_HEADER include/spdk/nvme.h 00:03:16.122 TEST_HEADER include/spdk/notify.h 00:03:16.122 TEST_HEADER include/spdk/nvme_intel.h 00:03:16.122 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:16.122 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:16.122 TEST_HEADER include/spdk/nvme_spec.h 00:03:16.122 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:16.122 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:16.122 TEST_HEADER include/spdk/nvme_zns.h 00:03:16.122 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:16.122 TEST_HEADER include/spdk/nvmf.h 00:03:16.122 TEST_HEADER include/spdk/nvmf_spec.h 00:03:16.122 TEST_HEADER include/spdk/opal.h 00:03:16.122 TEST_HEADER include/spdk/nvmf_transport.h 00:03:16.122 TEST_HEADER include/spdk/opal_spec.h 00:03:16.122 TEST_HEADER include/spdk/pipe.h 00:03:16.122 TEST_HEADER include/spdk/pci_ids.h 00:03:16.122 CC app/iscsi_tgt/iscsi_tgt.o 00:03:16.123 TEST_HEADER include/spdk/queue.h 00:03:16.123 TEST_HEADER include/spdk/reduce.h 00:03:16.123 TEST_HEADER include/spdk/rpc.h 00:03:16.123 TEST_HEADER include/spdk/scheduler.h 00:03:16.123 TEST_HEADER include/spdk/scsi.h 
00:03:16.123 TEST_HEADER include/spdk/scsi_spec.h 00:03:16.123 TEST_HEADER include/spdk/sock.h 00:03:16.123 TEST_HEADER include/spdk/stdinc.h 00:03:16.123 CC app/nvmf_tgt/nvmf_main.o 00:03:16.123 TEST_HEADER include/spdk/string.h 00:03:16.123 TEST_HEADER include/spdk/thread.h 00:03:16.123 TEST_HEADER include/spdk/trace.h 00:03:16.123 TEST_HEADER include/spdk/tree.h 00:03:16.123 TEST_HEADER include/spdk/trace_parser.h 00:03:16.123 TEST_HEADER include/spdk/ublk.h 00:03:16.123 TEST_HEADER include/spdk/util.h 00:03:16.123 TEST_HEADER include/spdk/uuid.h 00:03:16.123 TEST_HEADER include/spdk/version.h 00:03:16.123 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:16.123 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:16.123 TEST_HEADER include/spdk/vhost.h 00:03:16.123 TEST_HEADER include/spdk/vmd.h 00:03:16.123 TEST_HEADER include/spdk/zipf.h 00:03:16.123 TEST_HEADER include/spdk/xor.h 00:03:16.123 CXX test/cpp_headers/accel.o 00:03:16.123 CC app/spdk_tgt/spdk_tgt.o 00:03:16.123 CXX test/cpp_headers/barrier.o 00:03:16.123 CXX test/cpp_headers/accel_module.o 00:03:16.123 CXX test/cpp_headers/assert.o 00:03:16.123 CXX test/cpp_headers/base64.o 00:03:16.123 CXX test/cpp_headers/bdev.o 00:03:16.123 CXX test/cpp_headers/bdev_module.o 00:03:16.123 CXX test/cpp_headers/bdev_zone.o 00:03:16.123 CXX test/cpp_headers/bit_array.o 00:03:16.123 CXX test/cpp_headers/bit_pool.o 00:03:16.123 CXX test/cpp_headers/blobfs.o 00:03:16.123 CXX test/cpp_headers/blob_bdev.o 00:03:16.123 CXX test/cpp_headers/blob.o 00:03:16.123 CXX test/cpp_headers/blobfs_bdev.o 00:03:16.123 CXX test/cpp_headers/conf.o 00:03:16.123 CXX test/cpp_headers/config.o 00:03:16.123 CXX test/cpp_headers/cpuset.o 00:03:16.123 CXX test/cpp_headers/crc16.o 00:03:16.123 CXX test/cpp_headers/crc32.o 00:03:16.123 CXX test/cpp_headers/crc64.o 00:03:16.123 CXX test/cpp_headers/dif.o 00:03:16.123 CXX test/cpp_headers/dma.o 00:03:16.123 CXX test/cpp_headers/endian.o 00:03:16.123 CXX test/cpp_headers/env_dpdk.o 00:03:16.123 CXX test/cpp_headers/env.o 00:03:16.123 CXX test/cpp_headers/fd_group.o 00:03:16.123 CXX test/cpp_headers/event.o 00:03:16.123 CXX test/cpp_headers/file.o 00:03:16.123 CXX test/cpp_headers/fd.o 00:03:16.123 CXX test/cpp_headers/ftl.o 00:03:16.123 CXX test/cpp_headers/gpt_spec.o 00:03:16.123 CXX test/cpp_headers/hexlify.o 00:03:16.123 CXX test/cpp_headers/histogram_data.o 00:03:16.123 CXX test/cpp_headers/idxd.o 00:03:16.123 CXX test/cpp_headers/idxd_spec.o 00:03:16.123 CXX test/cpp_headers/init.o 00:03:16.123 CXX test/cpp_headers/ioat.o 00:03:16.123 CC examples/vmd/lsvmd/lsvmd.o 00:03:16.123 CC examples/vmd/led/led.o 00:03:16.123 CC examples/sock/hello_world/hello_sock.o 00:03:16.123 CC examples/nvme/hello_world/hello_world.o 00:03:16.123 CC test/env/vtophys/vtophys.o 00:03:16.123 CC examples/idxd/perf/perf.o 00:03:16.123 CC test/thread/poller_perf/poller_perf.o 00:03:16.123 CC test/env/pci/pci_ut.o 00:03:16.123 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:16.123 CC examples/nvme/arbitration/arbitration.o 00:03:16.123 CC examples/nvme/hotplug/hotplug.o 00:03:16.123 CC app/fio/nvme/fio_plugin.o 00:03:16.123 CC examples/util/zipf/zipf.o 00:03:16.123 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:16.123 CC test/nvme/err_injection/err_injection.o 00:03:16.123 CC test/event/event_perf/event_perf.o 00:03:16.123 CC examples/nvme/abort/abort.o 00:03:16.123 CC test/env/memory/memory_ut.o 00:03:16.123 CC test/nvme/startup/startup.o 00:03:16.123 CC test/app/jsoncat/jsoncat.o 00:03:16.123 CC examples/ioat/verify/verify.o 00:03:16.123 
CC test/nvme/reset/reset.o 00:03:16.123 CC examples/nvme/reconnect/reconnect.o 00:03:16.123 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:16.123 CXX test/cpp_headers/ioat_spec.o 00:03:16.123 CC examples/bdev/hello_world/hello_bdev.o 00:03:16.123 CC examples/ioat/perf/perf.o 00:03:16.123 CC test/app/stub/stub.o 00:03:16.123 CC test/event/reactor_perf/reactor_perf.o 00:03:16.123 CC test/app/histogram_perf/histogram_perf.o 00:03:16.123 CC test/nvme/overhead/overhead.o 00:03:16.123 CC test/nvme/e2edp/nvme_dp.o 00:03:16.123 CC test/event/reactor/reactor.o 00:03:16.123 CC test/nvme/reserve/reserve.o 00:03:16.123 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:16.123 CC test/nvme/connect_stress/connect_stress.o 00:03:16.123 CC test/nvme/aer/aer.o 00:03:16.123 CC test/nvme/fused_ordering/fused_ordering.o 00:03:16.123 CC examples/bdev/bdevperf/bdevperf.o 00:03:16.123 CC test/nvme/sgl/sgl.o 00:03:16.123 CC test/nvme/cuse/cuse.o 00:03:16.123 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:16.123 CC test/nvme/compliance/nvme_compliance.o 00:03:16.123 CC test/dma/test_dma/test_dma.o 00:03:16.123 CC test/nvme/boot_partition/boot_partition.o 00:03:16.123 CC test/event/app_repeat/app_repeat.o 00:03:16.123 CC test/accel/dif/dif.o 00:03:16.123 CC test/nvme/simple_copy/simple_copy.o 00:03:16.394 CC test/nvme/fdp/fdp.o 00:03:16.394 CC examples/accel/perf/accel_perf.o 00:03:16.394 CC test/blobfs/mkfs/mkfs.o 00:03:16.394 CC examples/thread/thread/thread_ex.o 00:03:16.394 CC examples/nvmf/nvmf/nvmf.o 00:03:16.394 CC test/bdev/bdevio/bdevio.o 00:03:16.394 CC examples/blob/cli/blobcli.o 00:03:16.394 CC test/app/bdev_svc/bdev_svc.o 00:03:16.394 CC app/fio/bdev/fio_plugin.o 00:03:16.394 CC test/event/scheduler/scheduler.o 00:03:16.394 CC examples/blob/hello_world/hello_blob.o 00:03:16.394 LINK spdk_lspci 00:03:16.394 CC test/lvol/esnap/esnap.o 00:03:16.394 CC test/env/mem_callbacks/mem_callbacks.o 00:03:16.394 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:16.655 LINK spdk_nvme_discover 00:03:16.655 LINK rpc_client_test 00:03:16.655 LINK vhost 00:03:16.655 LINK interrupt_tgt 00:03:16.655 LINK poller_perf 00:03:16.655 LINK lsvmd 00:03:16.655 LINK nvmf_tgt 00:03:16.655 LINK led 00:03:16.655 LINK env_dpdk_post_init 00:03:16.655 LINK histogram_perf 00:03:16.655 LINK vtophys 00:03:16.655 LINK startup 00:03:16.655 LINK iscsi_tgt 00:03:16.655 LINK reactor_perf 00:03:16.655 LINK zipf 00:03:16.655 LINK spdk_trace_record 00:03:16.655 LINK err_injection 00:03:16.655 LINK event_perf 00:03:16.655 LINK spdk_tgt 00:03:16.655 LINK app_repeat 00:03:16.655 LINK reactor 00:03:16.655 LINK jsoncat 00:03:16.655 LINK pmr_persistence 00:03:16.655 LINK boot_partition 00:03:16.655 LINK connect_stress 00:03:16.655 CXX test/cpp_headers/iscsi_spec.o 00:03:16.655 LINK verify 00:03:16.655 CXX test/cpp_headers/json.o 00:03:16.655 LINK stub 00:03:16.655 CXX test/cpp_headers/jsonrpc.o 00:03:16.655 CXX test/cpp_headers/likely.o 00:03:16.655 LINK doorbell_aers 00:03:16.655 CXX test/cpp_headers/lvol.o 00:03:16.655 CXX test/cpp_headers/log.o 00:03:16.928 LINK hello_world 00:03:16.928 CXX test/cpp_headers/memory.o 00:03:16.928 CXX test/cpp_headers/mmio.o 00:03:16.928 CXX test/cpp_headers/nbd.o 00:03:16.928 CXX test/cpp_headers/notify.o 00:03:16.928 CXX test/cpp_headers/nvme.o 00:03:16.928 LINK bdev_svc 00:03:16.928 CXX test/cpp_headers/nvme_intel.o 00:03:16.928 LINK mkfs 00:03:16.928 LINK cmb_copy 00:03:16.928 LINK fused_ordering 00:03:16.928 CXX test/cpp_headers/nvme_ocssd.o 00:03:16.928 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:16.928 LINK 
reserve 00:03:16.928 CXX test/cpp_headers/nvme_spec.o 00:03:16.928 LINK hello_sock 00:03:16.928 CXX test/cpp_headers/nvme_zns.o 00:03:16.928 CXX test/cpp_headers/nvmf_cmd.o 00:03:16.928 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:16.928 CXX test/cpp_headers/nvmf.o 00:03:16.928 CXX test/cpp_headers/nvmf_spec.o 00:03:16.928 LINK ioat_perf 00:03:16.928 CXX test/cpp_headers/nvmf_transport.o 00:03:16.928 CXX test/cpp_headers/opal.o 00:03:16.928 CXX test/cpp_headers/opal_spec.o 00:03:16.928 LINK reset 00:03:16.928 CXX test/cpp_headers/pci_ids.o 00:03:16.928 LINK hotplug 00:03:16.928 CXX test/cpp_headers/pipe.o 00:03:16.928 CXX test/cpp_headers/queue.o 00:03:16.928 CXX test/cpp_headers/reduce.o 00:03:16.928 LINK hello_bdev 00:03:16.928 CXX test/cpp_headers/rpc.o 00:03:16.928 LINK simple_copy 00:03:16.928 CXX test/cpp_headers/scheduler.o 00:03:16.928 CXX test/cpp_headers/scsi.o 00:03:16.928 CXX test/cpp_headers/scsi_spec.o 00:03:16.928 CXX test/cpp_headers/stdinc.o 00:03:16.928 CXX test/cpp_headers/sock.o 00:03:16.928 CXX test/cpp_headers/string.o 00:03:16.928 CXX test/cpp_headers/thread.o 00:03:16.928 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:16.928 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:16.928 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:16.928 LINK overhead 00:03:16.928 LINK thread 00:03:16.928 LINK nvme_dp 00:03:16.928 CXX test/cpp_headers/trace.o 00:03:16.928 LINK spdk_dd 00:03:16.928 LINK scheduler 00:03:16.928 CXX test/cpp_headers/trace_parser.o 00:03:16.928 LINK aer 00:03:16.928 LINK sgl 00:03:16.928 LINK hello_blob 00:03:16.928 LINK mem_callbacks 00:03:16.928 CXX test/cpp_headers/tree.o 00:03:16.928 LINK nvmf 00:03:16.928 CXX test/cpp_headers/ublk.o 00:03:16.928 LINK arbitration 00:03:16.928 CXX test/cpp_headers/util.o 00:03:16.928 LINK nvme_compliance 00:03:16.928 LINK reconnect 00:03:16.928 LINK spdk_trace 00:03:16.928 LINK fdp 00:03:16.928 LINK idxd_perf 00:03:16.928 LINK pci_ut 00:03:16.928 CXX test/cpp_headers/uuid.o 00:03:17.188 CXX test/cpp_headers/version.o 00:03:17.188 CXX test/cpp_headers/vfio_user_pci.o 00:03:17.188 CXX test/cpp_headers/vfio_user_spec.o 00:03:17.188 CXX test/cpp_headers/vhost.o 00:03:17.188 CXX test/cpp_headers/vmd.o 00:03:17.188 CXX test/cpp_headers/xor.o 00:03:17.188 CXX test/cpp_headers/zipf.o 00:03:17.188 LINK dif 00:03:17.188 LINK abort 00:03:17.188 LINK test_dma 00:03:17.188 LINK accel_perf 00:03:17.188 LINK bdevio 00:03:17.188 LINK nvme_fuzz 00:03:17.188 LINK memory_ut 00:03:17.188 LINK nvme_manage 00:03:17.446 LINK blobcli 00:03:17.446 LINK spdk_nvme 00:03:17.446 LINK spdk_bdev 00:03:17.446 LINK spdk_nvme_identify 00:03:17.446 LINK spdk_nvme_perf 00:03:17.446 LINK vhost_fuzz 00:03:17.446 LINK spdk_top 00:03:17.446 LINK bdevperf 00:03:17.705 LINK cuse 00:03:18.274 LINK iscsi_fuzz 00:03:20.179 LINK esnap 00:03:20.179 00:03:20.179 real 0m30.516s 00:03:20.179 user 4m49.694s 00:03:20.179 sys 2m35.514s 00:03:20.179 15:53:50 -- common/autotest_common.sh@1115 -- $ xtrace_disable 00:03:20.179 15:53:50 -- common/autotest_common.sh@10 -- $ set +x 00:03:20.179 ************************************ 00:03:20.180 END TEST make 00:03:20.180 ************************************ 00:03:20.445 15:53:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:20.445 15:53:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:20.445 15:53:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:20.445 15:53:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:20.445 15:53:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 
00:03:20.445 15:53:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:20.445 15:53:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:20.445 15:53:51 -- scripts/common.sh@335 -- # IFS=.-: 00:03:20.445 15:53:51 -- scripts/common.sh@335 -- # read -ra ver1 00:03:20.445 15:53:51 -- scripts/common.sh@336 -- # IFS=.-: 00:03:20.445 15:53:51 -- scripts/common.sh@336 -- # read -ra ver2 00:03:20.445 15:53:51 -- scripts/common.sh@337 -- # local 'op=<' 00:03:20.445 15:53:51 -- scripts/common.sh@339 -- # ver1_l=2 00:03:20.445 15:53:51 -- scripts/common.sh@340 -- # ver2_l=1 00:03:20.445 15:53:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:20.445 15:53:51 -- scripts/common.sh@343 -- # case "$op" in 00:03:20.445 15:53:51 -- scripts/common.sh@344 -- # : 1 00:03:20.445 15:53:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:20.445 15:53:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:20.445 15:53:51 -- scripts/common.sh@364 -- # decimal 1 00:03:20.445 15:53:51 -- scripts/common.sh@352 -- # local d=1 00:03:20.445 15:53:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:20.445 15:53:51 -- scripts/common.sh@354 -- # echo 1 00:03:20.445 15:53:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:20.445 15:53:51 -- scripts/common.sh@365 -- # decimal 2 00:03:20.445 15:53:51 -- scripts/common.sh@352 -- # local d=2 00:03:20.445 15:53:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:20.445 15:53:51 -- scripts/common.sh@354 -- # echo 2 00:03:20.445 15:53:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:20.445 15:53:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:20.445 15:53:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:20.445 15:53:51 -- scripts/common.sh@367 -- # return 0 00:03:20.445 15:53:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:20.445 15:53:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:20.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.446 --rc genhtml_branch_coverage=1 00:03:20.446 --rc genhtml_function_coverage=1 00:03:20.446 --rc genhtml_legend=1 00:03:20.446 --rc geninfo_all_blocks=1 00:03:20.446 --rc geninfo_unexecuted_blocks=1 00:03:20.446 00:03:20.446 ' 00:03:20.446 15:53:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:20.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.446 --rc genhtml_branch_coverage=1 00:03:20.446 --rc genhtml_function_coverage=1 00:03:20.446 --rc genhtml_legend=1 00:03:20.446 --rc geninfo_all_blocks=1 00:03:20.446 --rc geninfo_unexecuted_blocks=1 00:03:20.446 00:03:20.446 ' 00:03:20.446 15:53:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:20.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.446 --rc genhtml_branch_coverage=1 00:03:20.446 --rc genhtml_function_coverage=1 00:03:20.446 --rc genhtml_legend=1 00:03:20.446 --rc geninfo_all_blocks=1 00:03:20.446 --rc geninfo_unexecuted_blocks=1 00:03:20.446 00:03:20.446 ' 00:03:20.446 15:53:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:20.446 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:20.446 --rc genhtml_branch_coverage=1 00:03:20.446 --rc genhtml_function_coverage=1 00:03:20.446 --rc genhtml_legend=1 00:03:20.446 --rc geninfo_all_blocks=1 00:03:20.446 --rc geninfo_unexecuted_blocks=1 00:03:20.446 00:03:20.446 ' 00:03:20.446 15:53:51 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:03:20.446 15:53:51 -- nvmf/common.sh@7 -- # uname -s 00:03:20.446 15:53:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:20.446 15:53:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:20.446 15:53:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:20.446 15:53:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:20.446 15:53:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:20.446 15:53:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:20.446 15:53:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:20.446 15:53:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:20.446 15:53:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:20.447 15:53:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:20.447 15:53:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:03:20.447 15:53:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:03:20.447 15:53:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:20.447 15:53:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:20.447 15:53:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:20.447 15:53:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:03:20.447 15:53:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:20.447 15:53:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:20.447 15:53:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:20.447 15:53:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.447 15:53:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.447 15:53:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.447 15:53:51 -- paths/export.sh@5 -- # export PATH 00:03:20.447 15:53:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:20.447 15:53:51 -- nvmf/common.sh@46 -- # : 0 00:03:20.447 15:53:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:03:20.447 15:53:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:03:20.447 15:53:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:03:20.448 15:53:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:20.448 15:53:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:20.448 15:53:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:03:20.448 15:53:51 -- nvmf/common.sh@34 -- # '[' 
0 -eq 1 ']' 00:03:20.448 15:53:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:03:20.448 15:53:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:20.448 15:53:51 -- spdk/autotest.sh@32 -- # uname -s 00:03:20.448 15:53:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:20.448 15:53:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:20.448 15:53:51 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:20.448 15:53:51 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:20.448 15:53:51 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/coredumps 00:03:20.448 15:53:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:20.448 15:53:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:20.448 15:53:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:20.448 15:53:51 -- spdk/autotest.sh@48 -- # udevadm_pid=1123448 00:03:20.448 15:53:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:20.448 15:53:51 -- spdk/autotest.sh@51 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:03:20.448 15:53:51 -- spdk/autotest.sh@54 -- # echo 1123450 00:03:20.448 15:53:51 -- spdk/autotest.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:03:20.448 15:53:51 -- spdk/autotest.sh@56 -- # echo 1123451 00:03:20.448 15:53:51 -- spdk/autotest.sh@58 -- # [[ ............................... != QEMU ]] 00:03:20.448 15:53:51 -- spdk/autotest.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power 00:03:20.448 15:53:51 -- spdk/autotest.sh@60 -- # echo 1123452 00:03:20.448 15:53:51 -- spdk/autotest.sh@59 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:03:20.448 15:53:51 -- spdk/autotest.sh@62 -- # echo 1123453 00:03:20.448 15:53:51 -- spdk/autotest.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power -l 00:03:20.449 15:53:51 -- spdk/autotest.sh@66 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:20.449 15:53:51 -- spdk/autotest.sh@68 -- # timing_enter autotest 00:03:20.714 15:53:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:20.714 15:53:51 -- common/autotest_common.sh@10 -- # set +x 00:03:20.714 15:53:51 -- spdk/autotest.sh@70 -- # create_test_list 00:03:20.714 15:53:51 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:20.714 15:53:51 -- common/autotest_common.sh@10 -- # set +x 00:03:20.714 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-bmc-pm.bmc.pm.log 00:03:20.714 Redirecting to /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/power/collect-cpu-temp.pm.log 00:03:20.714 15:53:51 -- spdk/autotest.sh@72 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/autotest.sh 00:03:20.714 15:53:51 -- spdk/autotest.sh@72 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:20.714 15:53:51 -- spdk/autotest.sh@72 -- # src=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:20.714 15:53:51 -- spdk/autotest.sh@73 -- # 
out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:03:20.714 15:53:51 -- spdk/autotest.sh@74 -- # cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:03:20.714 15:53:51 -- spdk/autotest.sh@76 -- # freebsd_update_contigmem_mod 00:03:20.714 15:53:51 -- common/autotest_common.sh@1450 -- # uname 00:03:20.714 15:53:51 -- common/autotest_common.sh@1450 -- # '[' Linux = FreeBSD ']' 00:03:20.714 15:53:51 -- spdk/autotest.sh@77 -- # freebsd_set_maxsock_buf 00:03:20.714 15:53:51 -- common/autotest_common.sh@1470 -- # uname 00:03:20.714 15:53:51 -- common/autotest_common.sh@1470 -- # [[ Linux = FreeBSD ]] 00:03:20.714 15:53:51 -- spdk/autotest.sh@79 -- # [[ y == y ]] 00:03:20.714 15:53:51 -- spdk/autotest.sh@81 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:20.714 lcov: LCOV version 1.15 00:03:20.714 15:53:51 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info 00:03:23.252 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:03:23.252 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:03:23.252 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:03:23.252 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:03:23.252 /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:03:23.252 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:03:45.183 15:54:13 -- spdk/autotest.sh@87 -- # timing_enter pre_cleanup 00:03:45.183 15:54:13 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:45.183 15:54:13 -- common/autotest_common.sh@10 -- # set +x 00:03:45.183 15:54:13 -- spdk/autotest.sh@89 -- # rm -f 00:03:45.183 15:54:13 -- spdk/autotest.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:46.564 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:46.564 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:46.564 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:46.564 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:46.564 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:46.564 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:46.564 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:46.564 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:46.564 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:46.564 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:03:46.564 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:46.825 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:46.825 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:46.825 0000:80:04.2 (8086 
2021): Already using the ioatdma driver 00:03:46.825 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:46.825 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:46.825 0000:d8:00.0 (8086 0a54): Already using the nvme driver 00:03:46.825 15:54:17 -- spdk/autotest.sh@94 -- # get_zoned_devs 00:03:46.825 15:54:17 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:46.825 15:54:17 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:46.825 15:54:17 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:46.825 15:54:17 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:46.825 15:54:17 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:46.825 15:54:17 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:46.825 15:54:17 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:46.825 15:54:17 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:46.825 15:54:17 -- spdk/autotest.sh@96 -- # (( 0 > 0 )) 00:03:46.825 15:54:17 -- spdk/autotest.sh@108 -- # ls /dev/nvme0n1 00:03:46.825 15:54:17 -- spdk/autotest.sh@108 -- # grep -v p 00:03:46.825 15:54:17 -- spdk/autotest.sh@108 -- # for dev in $(ls /dev/nvme*n* | grep -v p || true) 00:03:46.825 15:54:17 -- spdk/autotest.sh@110 -- # [[ -z '' ]] 00:03:46.825 15:54:17 -- spdk/autotest.sh@111 -- # block_in_use /dev/nvme0n1 00:03:46.825 15:54:17 -- scripts/common.sh@380 -- # local block=/dev/nvme0n1 pt 00:03:46.825 15:54:17 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:46.825 No valid GPT data, bailing 00:03:46.825 15:54:17 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:46.825 15:54:17 -- scripts/common.sh@393 -- # pt= 00:03:46.825 15:54:17 -- scripts/common.sh@394 -- # return 1 00:03:46.825 15:54:17 -- spdk/autotest.sh@112 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:46.825 1+0 records in 00:03:46.825 1+0 records out 00:03:46.825 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0051572 s, 203 MB/s 00:03:46.825 15:54:17 -- spdk/autotest.sh@116 -- # sync 00:03:46.825 15:54:17 -- spdk/autotest.sh@118 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:46.825 15:54:17 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:46.825 15:54:17 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:54.951 15:54:24 -- spdk/autotest.sh@122 -- # uname -s 00:03:54.951 15:54:24 -- spdk/autotest.sh@122 -- # '[' Linux = Linux ']' 00:03:54.951 15:54:24 -- spdk/autotest.sh@123 -- # run_test setup.sh /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:54.951 15:54:24 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:54.951 15:54:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:54.951 15:54:24 -- common/autotest_common.sh@10 -- # set +x 00:03:54.951 ************************************ 00:03:54.951 START TEST setup.sh 00:03:54.951 ************************************ 00:03:54.951 15:54:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/test-setup.sh 00:03:54.951 * Looking for test storage... 
00:03:54.951 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:54.951 15:54:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:54.951 15:54:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:54.951 15:54:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:54.951 15:54:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:54.951 15:54:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:54.951 15:54:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:54.951 15:54:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:54.951 15:54:25 -- scripts/common.sh@335 -- # IFS=.-: 00:03:54.951 15:54:25 -- scripts/common.sh@335 -- # read -ra ver1 00:03:54.951 15:54:25 -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.951 15:54:25 -- scripts/common.sh@336 -- # read -ra ver2 00:03:54.951 15:54:25 -- scripts/common.sh@337 -- # local 'op=<' 00:03:54.951 15:54:25 -- scripts/common.sh@339 -- # ver1_l=2 00:03:54.951 15:54:25 -- scripts/common.sh@340 -- # ver2_l=1 00:03:54.951 15:54:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:54.951 15:54:25 -- scripts/common.sh@343 -- # case "$op" in 00:03:54.951 15:54:25 -- scripts/common.sh@344 -- # : 1 00:03:54.951 15:54:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:54.951 15:54:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:54.951 15:54:25 -- scripts/common.sh@364 -- # decimal 1 00:03:54.951 15:54:25 -- scripts/common.sh@352 -- # local d=1 00:03:54.951 15:54:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.951 15:54:25 -- scripts/common.sh@354 -- # echo 1 00:03:54.951 15:54:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:54.951 15:54:25 -- scripts/common.sh@365 -- # decimal 2 00:03:54.951 15:54:25 -- scripts/common.sh@352 -- # local d=2 00:03:54.951 15:54:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.951 15:54:25 -- scripts/common.sh@354 -- # echo 2 00:03:54.951 15:54:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:54.951 15:54:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:54.951 15:54:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:54.951 15:54:25 -- scripts/common.sh@367 -- # return 0 00:03:54.951 15:54:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.951 15:54:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:54.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.951 --rc genhtml_branch_coverage=1 00:03:54.951 --rc genhtml_function_coverage=1 00:03:54.951 --rc genhtml_legend=1 00:03:54.951 --rc geninfo_all_blocks=1 00:03:54.951 --rc geninfo_unexecuted_blocks=1 00:03:54.951 00:03:54.951 ' 00:03:54.951 15:54:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:54.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.951 --rc genhtml_branch_coverage=1 00:03:54.951 --rc genhtml_function_coverage=1 00:03:54.951 --rc genhtml_legend=1 00:03:54.951 --rc geninfo_all_blocks=1 00:03:54.951 --rc geninfo_unexecuted_blocks=1 00:03:54.951 00:03:54.951 ' 00:03:54.951 15:54:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:54.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.951 --rc genhtml_branch_coverage=1 00:03:54.951 --rc genhtml_function_coverage=1 00:03:54.951 --rc genhtml_legend=1 00:03:54.951 --rc geninfo_all_blocks=1 00:03:54.951 --rc geninfo_unexecuted_blocks=1 00:03:54.951 00:03:54.951 ' 
00:03:54.951 15:54:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:54.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.951 --rc genhtml_branch_coverage=1 00:03:54.951 --rc genhtml_function_coverage=1 00:03:54.951 --rc genhtml_legend=1 00:03:54.951 --rc geninfo_all_blocks=1 00:03:54.951 --rc geninfo_unexecuted_blocks=1 00:03:54.951 00:03:54.951 ' 00:03:54.951 15:54:25 -- setup/test-setup.sh@10 -- # uname -s 00:03:54.951 15:54:25 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:54.951 15:54:25 -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:54.951 15:54:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:03:54.951 15:54:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:03:54.951 15:54:25 -- common/autotest_common.sh@10 -- # set +x 00:03:54.951 ************************************ 00:03:54.951 START TEST acl 00:03:54.951 ************************************ 00:03:54.951 15:54:25 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/acl.sh 00:03:54.951 * Looking for test storage... 00:03:54.951 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:03:54.951 15:54:25 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:03:54.951 15:54:25 -- common/autotest_common.sh@1690 -- # lcov --version 00:03:54.951 15:54:25 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:03:54.951 15:54:25 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:03:54.951 15:54:25 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:03:54.951 15:54:25 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:03:54.951 15:54:25 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:03:54.951 15:54:25 -- scripts/common.sh@335 -- # IFS=.-: 00:03:54.951 15:54:25 -- scripts/common.sh@335 -- # read -ra ver1 00:03:54.951 15:54:25 -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.951 15:54:25 -- scripts/common.sh@336 -- # read -ra ver2 00:03:54.951 15:54:25 -- scripts/common.sh@337 -- # local 'op=<' 00:03:54.951 15:54:25 -- scripts/common.sh@339 -- # ver1_l=2 00:03:54.951 15:54:25 -- scripts/common.sh@340 -- # ver2_l=1 00:03:54.951 15:54:25 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:03:54.951 15:54:25 -- scripts/common.sh@343 -- # case "$op" in 00:03:54.951 15:54:25 -- scripts/common.sh@344 -- # : 1 00:03:54.951 15:54:25 -- scripts/common.sh@363 -- # (( v = 0 )) 00:03:54.951 15:54:25 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:54.951 15:54:25 -- scripts/common.sh@364 -- # decimal 1 00:03:54.951 15:54:25 -- scripts/common.sh@352 -- # local d=1 00:03:54.951 15:54:25 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.951 15:54:25 -- scripts/common.sh@354 -- # echo 1 00:03:54.951 15:54:25 -- scripts/common.sh@364 -- # ver1[v]=1 00:03:54.951 15:54:25 -- scripts/common.sh@365 -- # decimal 2 00:03:54.951 15:54:25 -- scripts/common.sh@352 -- # local d=2 00:03:54.951 15:54:25 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.951 15:54:25 -- scripts/common.sh@354 -- # echo 2 00:03:54.951 15:54:25 -- scripts/common.sh@365 -- # ver2[v]=2 00:03:54.951 15:54:25 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:03:54.951 15:54:25 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:03:54.951 15:54:25 -- scripts/common.sh@367 -- # return 0 00:03:54.951 15:54:25 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.951 15:54:25 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:03:54.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.951 --rc genhtml_branch_coverage=1 00:03:54.951 --rc genhtml_function_coverage=1 00:03:54.951 --rc genhtml_legend=1 00:03:54.951 --rc geninfo_all_blocks=1 00:03:54.951 --rc geninfo_unexecuted_blocks=1 00:03:54.951 00:03:54.951 ' 00:03:54.951 15:54:25 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:03:54.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.951 --rc genhtml_branch_coverage=1 00:03:54.951 --rc genhtml_function_coverage=1 00:03:54.951 --rc genhtml_legend=1 00:03:54.951 --rc geninfo_all_blocks=1 00:03:54.951 --rc geninfo_unexecuted_blocks=1 00:03:54.951 00:03:54.951 ' 00:03:54.951 15:54:25 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:03:54.951 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.951 --rc genhtml_branch_coverage=1 00:03:54.951 --rc genhtml_function_coverage=1 00:03:54.951 --rc genhtml_legend=1 00:03:54.951 --rc geninfo_all_blocks=1 00:03:54.952 --rc geninfo_unexecuted_blocks=1 00:03:54.952 00:03:54.952 ' 00:03:54.952 15:54:25 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:03:54.952 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.952 --rc genhtml_branch_coverage=1 00:03:54.952 --rc genhtml_function_coverage=1 00:03:54.952 --rc genhtml_legend=1 00:03:54.952 --rc geninfo_all_blocks=1 00:03:54.952 --rc geninfo_unexecuted_blocks=1 00:03:54.952 00:03:54.952 ' 00:03:54.952 15:54:25 -- setup/acl.sh@10 -- # get_zoned_devs 00:03:54.952 15:54:25 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:03:54.952 15:54:25 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:03:54.952 15:54:25 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:03:54.952 15:54:25 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:03:54.952 15:54:25 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:03:54.952 15:54:25 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:03:54.952 15:54:25 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:54.952 15:54:25 -- common/autotest_common.sh@1660 -- # [[ none != none ]] 00:03:54.952 15:54:25 -- setup/acl.sh@12 -- # devs=() 00:03:54.952 15:54:25 -- setup/acl.sh@12 -- # declare -a devs 00:03:54.952 15:54:25 -- setup/acl.sh@13 -- # drivers=() 00:03:54.952 15:54:25 -- setup/acl.sh@13 -- # declare -A drivers 00:03:54.952 15:54:25 -- setup/acl.sh@51 -- # 
setup reset 00:03:54.952 15:54:25 -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:54.952 15:54:25 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:03:59.150 15:54:29 -- setup/acl.sh@52 -- # collect_setup_devs 00:03:59.151 15:54:29 -- setup/acl.sh@16 -- # local dev driver 00:03:59.151 15:54:29 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:59.151 15:54:29 -- setup/acl.sh@15 -- # setup output status 00:03:59.151 15:54:29 -- setup/common.sh@9 -- # [[ output == output ]] 00:03:59.151 15:54:29 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:04:01.773 Hugepages 00:04:01.773 node hugesize free / total 00:04:01.773 15:54:32 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:01.773 15:54:32 -- setup/acl.sh@19 -- # continue 00:04:01.773 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.773 15:54:32 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:01.773 15:54:32 -- setup/acl.sh@19 -- # continue 00:04:01.773 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.773 15:54:32 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:01.773 15:54:32 -- setup/acl.sh@19 -- # continue 00:04:01.773 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.773 00:04:01.773 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:01.773 15:54:32 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:01.773 15:54:32 -- setup/acl.sh@19 -- # continue 00:04:01.773 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.773 15:54:32 -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:01.773 15:54:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.773 15:54:32 -- setup/acl.sh@20 -- # continue 00:04:01.773 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.773 15:54:32 -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:01.773 15:54:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.773 15:54:32 -- setup/acl.sh@20 -- # continue 00:04:01.773 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.773 15:54:32 -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:04:01.773 15:54:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.773 15:54:32 -- setup/acl.sh@20 -- # continue 00:04:01.773 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.773 15:54:32 -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # continue 00:04:01.774 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.774 15:54:32 -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # continue 00:04:01.774 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.774 15:54:32 -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # continue 00:04:01.774 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.774 15:54:32 -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # continue 00:04:01.774 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.774 15:54:32 -- 
setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # continue 00:04:01.774 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.774 15:54:32 -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # continue 00:04:01.774 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.774 15:54:32 -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # continue 00:04:01.774 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.774 15:54:32 -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # continue 00:04:01.774 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.774 15:54:32 -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # continue 00:04:01.774 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.774 15:54:32 -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # continue 00:04:01.774 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.774 15:54:32 -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # continue 00:04:01.774 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.774 15:54:32 -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # continue 00:04:01.774 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:01.774 15:54:32 -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:01.774 15:54:32 -- setup/acl.sh@20 -- # continue 00:04:01.774 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.034 15:54:32 -- setup/acl.sh@19 -- # [[ 0000:d8:00.0 == *:*:*.* ]] 00:04:02.034 15:54:32 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:02.034 15:54:32 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:04:02.034 15:54:32 -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:02.034 15:54:32 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:02.034 15:54:32 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:02.034 15:54:32 -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:02.034 15:54:32 -- setup/acl.sh@54 -- # run_test denied denied 00:04:02.034 15:54:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:02.034 15:54:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:02.034 15:54:32 -- common/autotest_common.sh@10 -- # set +x 00:04:02.034 ************************************ 00:04:02.034 START TEST denied 00:04:02.034 ************************************ 00:04:02.034 15:54:32 -- common/autotest_common.sh@1114 -- # denied 00:04:02.034 15:54:32 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:d8:00.0' 00:04:02.034 15:54:32 -- setup/acl.sh@38 -- # setup output config 
00:04:02.034 15:54:32 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:d8:00.0' 00:04:02.034 15:54:32 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.034 15:54:32 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:06.230 0000:d8:00.0 (8086 0a54): Skipping denied controller at 0000:d8:00.0 00:04:06.230 15:54:36 -- setup/acl.sh@40 -- # verify 0000:d8:00.0 00:04:06.230 15:54:36 -- setup/acl.sh@28 -- # local dev driver 00:04:06.230 15:54:36 -- setup/acl.sh@30 -- # for dev in "$@" 00:04:06.230 15:54:36 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:d8:00.0 ]] 00:04:06.230 15:54:36 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:d8:00.0/driver 00:04:06.230 15:54:36 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:06.230 15:54:36 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:06.230 15:54:36 -- setup/acl.sh@41 -- # setup reset 00:04:06.230 15:54:36 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:06.230 15:54:36 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:11.506 00:04:11.506 real 0m8.687s 00:04:11.506 user 0m2.670s 00:04:11.506 sys 0m5.327s 00:04:11.506 15:54:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:11.506 15:54:41 -- common/autotest_common.sh@10 -- # set +x 00:04:11.506 ************************************ 00:04:11.506 END TEST denied 00:04:11.506 ************************************ 00:04:11.506 15:54:41 -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:11.506 15:54:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:11.506 15:54:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:11.506 15:54:41 -- common/autotest_common.sh@10 -- # set +x 00:04:11.506 ************************************ 00:04:11.506 START TEST allowed 00:04:11.506 ************************************ 00:04:11.506 15:54:41 -- common/autotest_common.sh@1114 -- # allowed 00:04:11.506 15:54:41 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:d8:00.0 00:04:11.506 15:54:41 -- setup/acl.sh@45 -- # setup output config 00:04:11.506 15:54:41 -- setup/acl.sh@46 -- # grep -E '0000:d8:00.0 .*: nvme -> .*' 00:04:11.506 15:54:41 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.506 15:54:41 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:16.786 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:16.786 15:54:46 -- setup/acl.sh@47 -- # verify 00:04:16.786 15:54:46 -- setup/acl.sh@28 -- # local dev driver 00:04:16.786 15:54:46 -- setup/acl.sh@48 -- # setup reset 00:04:16.786 15:54:46 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:16.786 15:54:46 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.983 00:04:20.983 real 0m9.660s 00:04:20.983 user 0m2.624s 00:04:20.983 sys 0m5.240s 00:04:20.983 15:54:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.983 15:54:51 -- common/autotest_common.sh@10 -- # set +x 00:04:20.983 ************************************ 00:04:20.983 END TEST allowed 00:04:20.983 ************************************ 00:04:20.983 00:04:20.983 real 0m26.086s 00:04:20.983 user 0m8.115s 00:04:20.983 sys 0m15.799s 00:04:20.983 15:54:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:20.983 15:54:51 -- common/autotest_common.sh@10 -- # set +x 00:04:20.983 ************************************ 00:04:20.983 END TEST acl 00:04:20.983 ************************************ 
00:04:20.983 15:54:51 -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:20.983 15:54:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:20.983 15:54:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:20.983 15:54:51 -- common/autotest_common.sh@10 -- # set +x 00:04:20.983 ************************************ 00:04:20.983 START TEST hugepages 00:04:20.983 ************************************ 00:04:20.983 15:54:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/hugepages.sh 00:04:20.983 * Looking for test storage... 00:04:20.983 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:20.983 15:54:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:20.983 15:54:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:20.983 15:54:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:20.983 15:54:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:20.983 15:54:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:20.983 15:54:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:20.983 15:54:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:20.983 15:54:51 -- scripts/common.sh@335 -- # IFS=.-: 00:04:20.983 15:54:51 -- scripts/common.sh@335 -- # read -ra ver1 00:04:20.983 15:54:51 -- scripts/common.sh@336 -- # IFS=.-: 00:04:20.983 15:54:51 -- scripts/common.sh@336 -- # read -ra ver2 00:04:20.983 15:54:51 -- scripts/common.sh@337 -- # local 'op=<' 00:04:20.983 15:54:51 -- scripts/common.sh@339 -- # ver1_l=2 00:04:20.983 15:54:51 -- scripts/common.sh@340 -- # ver2_l=1 00:04:20.983 15:54:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:20.983 15:54:51 -- scripts/common.sh@343 -- # case "$op" in 00:04:20.983 15:54:51 -- scripts/common.sh@344 -- # : 1 00:04:20.983 15:54:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:20.983 15:54:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:20.983 15:54:51 -- scripts/common.sh@364 -- # decimal 1 00:04:20.983 15:54:51 -- scripts/common.sh@352 -- # local d=1 00:04:20.984 15:54:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:20.984 15:54:51 -- scripts/common.sh@354 -- # echo 1 00:04:20.984 15:54:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:20.984 15:54:51 -- scripts/common.sh@365 -- # decimal 2 00:04:20.984 15:54:51 -- scripts/common.sh@352 -- # local d=2 00:04:20.984 15:54:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:20.984 15:54:51 -- scripts/common.sh@354 -- # echo 2 00:04:20.984 15:54:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:20.984 15:54:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:20.984 15:54:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:20.984 15:54:51 -- scripts/common.sh@367 -- # return 0 00:04:20.984 15:54:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:20.984 15:54:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:20.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.984 --rc genhtml_branch_coverage=1 00:04:20.984 --rc genhtml_function_coverage=1 00:04:20.984 --rc genhtml_legend=1 00:04:20.984 --rc geninfo_all_blocks=1 00:04:20.984 --rc geninfo_unexecuted_blocks=1 00:04:20.984 00:04:20.984 ' 00:04:20.984 15:54:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:20.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.984 --rc genhtml_branch_coverage=1 00:04:20.984 --rc genhtml_function_coverage=1 00:04:20.984 --rc genhtml_legend=1 00:04:20.984 --rc geninfo_all_blocks=1 00:04:20.984 --rc geninfo_unexecuted_blocks=1 00:04:20.984 00:04:20.984 ' 00:04:20.984 15:54:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:20.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.984 --rc genhtml_branch_coverage=1 00:04:20.984 --rc genhtml_function_coverage=1 00:04:20.984 --rc genhtml_legend=1 00:04:20.984 --rc geninfo_all_blocks=1 00:04:20.984 --rc geninfo_unexecuted_blocks=1 00:04:20.984 00:04:20.984 ' 00:04:20.984 15:54:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:20.984 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:20.984 --rc genhtml_branch_coverage=1 00:04:20.984 --rc genhtml_function_coverage=1 00:04:20.984 --rc genhtml_legend=1 00:04:20.984 --rc geninfo_all_blocks=1 00:04:20.984 --rc geninfo_unexecuted_blocks=1 00:04:20.984 00:04:20.984 ' 00:04:20.984 15:54:51 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:20.984 15:54:51 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:20.984 15:54:51 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:20.984 15:54:51 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:20.984 15:54:51 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:20.984 15:54:51 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:20.984 15:54:51 -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:20.984 15:54:51 -- setup/common.sh@18 -- # local node= 00:04:20.984 15:54:51 -- setup/common.sh@19 -- # local var val 00:04:20.984 15:54:51 -- setup/common.sh@20 -- # local mem_f mem 00:04:20.984 15:54:51 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:20.984 15:54:51 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:20.984 15:54:51 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:20.984 15:54:51 -- setup/common.sh@28 -- # mapfile -t mem 00:04:20.984 
15:54:51 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.984 15:54:51 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 41103844 kB' 'MemAvailable: 44808732 kB' 'Buffers: 4100 kB' 'Cached: 10858308 kB' 'SwapCached: 0 kB' 'Active: 7620848 kB' 'Inactive: 3692420 kB' 'Active(anon): 7232084 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454208 kB' 'Mapped: 186640 kB' 'Shmem: 6781224 kB' 'KReclaimable: 241140 kB' 'Slab: 1014908 kB' 'SReclaimable: 241140 kB' 'SUnreclaim: 773768 kB' 'KernelStack: 21968 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36433340 kB' 'Committed_AS: 8450468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217772 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:20.984 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.984 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.984 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 
15:54:51 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.985 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.985 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 
00:04:20.986 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # continue 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # IFS=': ' 00:04:20.986 15:54:51 -- setup/common.sh@31 -- # read -r var val _ 00:04:20.986 15:54:51 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:20.986 15:54:51 -- setup/common.sh@33 -- # echo 2048 00:04:20.986 15:54:51 -- setup/common.sh@33 -- # return 0 00:04:20.986 15:54:51 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:20.986 15:54:51 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:20.986 15:54:51 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:20.986 15:54:51 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:20.986 15:54:51 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:20.986 15:54:51 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:20.986 15:54:51 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:20.986 15:54:51 -- setup/hugepages.sh@207 -- # get_nodes 00:04:20.986 15:54:51 -- setup/hugepages.sh@27 -- # local node 00:04:20.986 15:54:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.986 15:54:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:20.986 15:54:51 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:20.986 15:54:51 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:20.986 15:54:51 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:20.986 15:54:51 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:20.986 15:54:51 -- setup/hugepages.sh@208 -- # clear_hp 00:04:20.986 15:54:51 -- setup/hugepages.sh@37 -- # local node hp 00:04:20.986 15:54:51 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:20.986 15:54:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:20.986 15:54:51 -- setup/hugepages.sh@41 -- # echo 0 
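
A minimal sketch of the lookup pattern traced above, assuming a plain read of /proc/meminfo (the real setup/common.sh helper also strips per-node "Node N " prefixes and works from an array): the script walks "Key: value" pairs until it reaches the requested field, here Hugepagesize, and echoes its value, which hugepages.sh then records as default_hugepages=2048.

get_meminfo_sketch() {
    local get=$1 var val _
    # Walk "Key: value" pairs until the requested key matches, then print its value.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch Hugepagesize   # prints 2048 on the machine traced here
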
00:04:20.986 15:54:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:20.986 15:54:51 -- setup/hugepages.sh@41 -- # echo 0 00:04:20.986 15:54:51 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:20.986 15:54:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:20.986 15:54:51 -- setup/hugepages.sh@41 -- # echo 0 00:04:20.986 15:54:51 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:20.986 15:54:51 -- setup/hugepages.sh@41 -- # echo 0 00:04:20.986 15:54:51 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:20.986 15:54:51 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:20.986 15:54:51 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:20.986 15:54:51 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:20.986 15:54:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:20.986 15:54:51 -- common/autotest_common.sh@10 -- # set +x 00:04:20.986 ************************************ 00:04:20.986 START TEST default_setup 00:04:20.986 ************************************ 00:04:20.986 15:54:51 -- common/autotest_common.sh@1114 -- # default_setup 00:04:20.986 15:54:51 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:20.986 15:54:51 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:20.986 15:54:51 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:20.986 15:54:51 -- setup/hugepages.sh@51 -- # shift 00:04:20.986 15:54:51 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:20.986 15:54:51 -- setup/hugepages.sh@52 -- # local node_ids 00:04:20.986 15:54:51 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:20.986 15:54:51 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:20.986 15:54:51 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:20.986 15:54:51 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:20.986 15:54:51 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:20.986 15:54:51 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:20.986 15:54:51 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:20.986 15:54:51 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:20.986 15:54:51 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:20.986 15:54:51 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:20.986 15:54:51 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:20.986 15:54:51 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:20.986 15:54:51 -- setup/hugepages.sh@73 -- # return 0 00:04:20.986 15:54:51 -- setup/hugepages.sh@137 -- # setup output 00:04:20.986 15:54:51 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:20.986 15:54:51 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:24.280 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:24.281 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:24.281 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:24.281 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:24.281 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:24.281 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:24.281 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:24.281 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:24.281 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:24.281 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:24.281 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 
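
In the trace above, clear_hp first zeroes every per-node hugepage pool, and get_test_nr_hugepages then sizes the test allocation: 2097152 kB at 2048 kB per page gives nr_hugepages=1024, all assigned to node 0, before scripts/setup.sh rebinds the ioatdma and NVMe devices to vfio-pci. A rough, hedged sketch of what that clearing step and the eventual allocation amount to at the sysfs level (paths are the standard kernel layout; the actual logic lives in the traced setup scripts):

shopt -s nullglob
# Zero every hugepage pool on every NUMA node (the clear_hp step).
for node in /sys/devices/system/node/node[0-9]*; do
    for hp in "$node"/hugepages/hugepages-*; do
        echo 0 | sudo tee "$hp/nr_hugepages" > /dev/null
    done
done
# Request 1024 x 2048 kB pages (2 GiB) on node 0, matching nodes_test[0]=1024 above.
echo 1024 | sudo tee \
    /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages > /dev/null
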
00:04:24.281 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:24.281 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:24.281 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:24.281 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:24.281 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:26.823 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:04:26.823 15:54:57 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:26.823 15:54:57 -- setup/hugepages.sh@89 -- # local node 00:04:26.823 15:54:57 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:26.823 15:54:57 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:26.823 15:54:57 -- setup/hugepages.sh@92 -- # local surp 00:04:26.823 15:54:57 -- setup/hugepages.sh@93 -- # local resv 00:04:26.823 15:54:57 -- setup/hugepages.sh@94 -- # local anon 00:04:26.823 15:54:57 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:26.824 15:54:57 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:26.824 15:54:57 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:26.824 15:54:57 -- setup/common.sh@18 -- # local node= 00:04:26.824 15:54:57 -- setup/common.sh@19 -- # local var val 00:04:26.824 15:54:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.824 15:54:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.824 15:54:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.824 15:54:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.824 15:54:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.824 15:54:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43299184 kB' 'MemAvailable: 47003868 kB' 'Buffers: 4100 kB' 'Cached: 10858980 kB' 'SwapCached: 0 kB' 'Active: 7623720 kB' 'Inactive: 3692420 kB' 'Active(anon): 7234956 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456392 kB' 'Mapped: 187728 kB' 'Shmem: 6781896 kB' 'KReclaimable: 240736 kB' 'Slab: 1013612 kB' 'SReclaimable: 240736 kB' 'SUnreclaim: 772876 kB' 'KernelStack: 22016 kB' 'PageTables: 7936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8487288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217964 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- 
setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ KernelStack == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.824 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.824 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- 
# read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:26.825 15:54:57 -- setup/common.sh@33 -- # echo 0 00:04:26.825 15:54:57 -- setup/common.sh@33 -- # return 0 00:04:26.825 15:54:57 -- setup/hugepages.sh@97 -- # anon=0 00:04:26.825 15:54:57 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:26.825 15:54:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.825 15:54:57 -- setup/common.sh@18 -- # local node= 00:04:26.825 15:54:57 -- setup/common.sh@19 -- # local var val 00:04:26.825 15:54:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.825 15:54:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.825 15:54:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.825 15:54:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.825 15:54:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.825 15:54:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43303816 kB' 'MemAvailable: 47008500 kB' 'Buffers: 4100 kB' 'Cached: 10858984 kB' 'SwapCached: 0 kB' 'Active: 7623800 kB' 'Inactive: 3692420 kB' 'Active(anon): 7235036 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456412 kB' 'Mapped: 187676 kB' 'Shmem: 6781900 kB' 'KReclaimable: 240736 kB' 'Slab: 1013452 kB' 'SReclaimable: 240736 kB' 'SUnreclaim: 772716 kB' 'KernelStack: 22096 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8488820 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217980 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # 
continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.825 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.825 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 
15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.826 15:54:57 -- setup/common.sh@33 -- # echo 0 00:04:26.826 15:54:57 -- setup/common.sh@33 -- # return 0 00:04:26.826 15:54:57 -- setup/hugepages.sh@99 -- # surp=0 00:04:26.826 15:54:57 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:26.826 15:54:57 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:26.826 15:54:57 -- setup/common.sh@18 -- # local node= 00:04:26.826 15:54:57 -- setup/common.sh@19 -- # local var val 00:04:26.826 15:54:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.826 15:54:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.826 15:54:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.826 15:54:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.826 15:54:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.826 15:54:57 -- setup/common.sh@29 
-- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43304152 kB' 'MemAvailable: 47008836 kB' 'Buffers: 4100 kB' 'Cached: 10858996 kB' 'SwapCached: 0 kB' 'Active: 7623652 kB' 'Inactive: 3692420 kB' 'Active(anon): 7234888 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456264 kB' 'Mapped: 187600 kB' 'Shmem: 6781912 kB' 'KReclaimable: 240736 kB' 'Slab: 1013408 kB' 'SReclaimable: 240736 kB' 'SUnreclaim: 772672 kB' 'KernelStack: 22032 kB' 'PageTables: 7832 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8488836 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.826 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.826 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 
00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ 
CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.827 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.827 15:54:57 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:26.827 15:54:57 -- setup/common.sh@33 -- # echo 0 00:04:26.827 15:54:57 -- setup/common.sh@33 -- # return 0 00:04:26.827 15:54:57 -- setup/hugepages.sh@100 -- # resv=0 00:04:26.827 15:54:57 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:26.827 nr_hugepages=1024 00:04:26.827 15:54:57 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:26.827 resv_hugepages=0 00:04:26.827 15:54:57 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:26.827 surplus_hugepages=0 00:04:26.827 15:54:57 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:26.827 anon_hugepages=0 00:04:26.827 15:54:57 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.827 15:54:57 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:26.827 15:54:57 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:26.827 15:54:57 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:26.827 15:54:57 -- setup/common.sh@18 -- # local node= 00:04:26.827 15:54:57 -- setup/common.sh@19 -- # local var val 00:04:26.827 15:54:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.827 15:54:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.827 15:54:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:26.827 15:54:57 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:26.827 15:54:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.827 15:54:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.828 15:54:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43301616 kB' 'MemAvailable: 47006300 kB' 'Buffers: 4100 kB' 'Cached: 10859008 kB' 'SwapCached: 0 kB' 'Active: 7623912 kB' 'Inactive: 3692420 kB' 'Active(anon): 7235148 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456536 kB' 'Mapped: 187600 kB' 'Shmem: 6781924 kB' 'KReclaimable: 240736 kB' 'Slab: 1013408 
kB' 'SReclaimable: 240736 kB' 'SUnreclaim: 772672 kB' 'KernelStack: 22128 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8488584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218012 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 
00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # 
[[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.828 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.828 15:54:57 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ Unaccepted 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:26.829 15:54:57 -- setup/common.sh@33 -- # echo 1024 00:04:26.829 15:54:57 -- setup/common.sh@33 -- # return 0 00:04:26.829 15:54:57 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:26.829 15:54:57 -- setup/hugepages.sh@112 -- # get_nodes 00:04:26.829 15:54:57 -- setup/hugepages.sh@27 -- # local node 00:04:26.829 15:54:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.829 15:54:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:26.829 15:54:57 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:26.829 15:54:57 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:26.829 15:54:57 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:26.829 15:54:57 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:26.829 15:54:57 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:26.829 15:54:57 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:26.829 15:54:57 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:26.829 15:54:57 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:26.829 15:54:57 -- setup/common.sh@18 -- # local node=0 00:04:26.829 15:54:57 -- setup/common.sh@19 -- # local var val 00:04:26.829 15:54:57 -- setup/common.sh@20 -- # local mem_f mem 00:04:26.829 15:54:57 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:26.829 15:54:57 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:26.829 15:54:57 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:26.829 15:54:57 -- setup/common.sh@28 -- # mapfile -t mem 00:04:26.829 15:54:57 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27438844 kB' 'MemUsed: 5146524 kB' 'SwapCached: 0 kB' 'Active: 1834348 kB' 'Inactive: 183816 kB' 'Active(anon): 1683868 kB' 'Inactive(anon): 0 kB' 'Active(file): 150480 kB' 'Inactive(file): 183816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1817152 kB' 'Mapped: 55640 kB' 'AnonPages: 204336 kB' 'Shmem: 1482856 kB' 'KernelStack: 13112 kB' 'PageTables: 4732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76648 kB' 'Slab: 444644 kB' 'SReclaimable: 76648 kB' 'SUnreclaim: 367996 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 
-- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.829 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.829 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 
00:04:26.830 15:54:57 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # continue 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # IFS=': ' 00:04:26.830 15:54:57 -- setup/common.sh@31 -- # read -r var val _ 00:04:26.830 15:54:57 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:26.830 15:54:57 -- setup/common.sh@33 -- # echo 0 00:04:26.830 15:54:57 -- setup/common.sh@33 -- # return 0 00:04:26.830 15:54:57 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:26.830 15:54:57 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:26.830 15:54:57 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:26.830 15:54:57 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:26.830 15:54:57 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:26.830 node0=1024 expecting 1024 00:04:26.830 15:54:57 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:26.830 00:04:26.830 real 0m5.860s 00:04:26.830 user 0m1.487s 00:04:26.830 sys 0m2.495s 00:04:26.830 15:54:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:26.830 15:54:57 -- common/autotest_common.sh@10 -- # set +x 00:04:26.830 
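[annotation] The trace above is setup/common.sh's get_meminfo walking /proc/meminfo (or a node's meminfo file) with IFS=': ', skipping every field until the requested key matches, then echoing its value; setup/hugepages.sh then checks that the HugePages_Total it read back equals nr_hugepages plus surplus plus reserved pages and reports "node0=1024 expecting 1024". The sketch below is an illustrative, self-contained rendering of that parsing pattern and accounting check, not the upstream script itself; the function name get_meminfo_sketch and the hard-coded 1024 (the count this particular run requested) are assumptions for the example.

#!/usr/bin/env bash
# Illustrative sketch only (not the upstream setup/common.sh): fetch one field
# from /proc/meminfo, or from a node's meminfo file when a node id is given,
# using the same split-on-': ' pattern the trace above shows.
shopt -s extglob   # needed for the "Node <n> " prefix strip below

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")    # per-node lines start with "Node N "
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue
        echo "$val"                     # numeric value only; a trailing kB unit lands in $_
        return 0
    done
    return 1
}

# The accounting the test performs afterwards: the kernel's HugePages_Total must
# equal the requested nr_hugepages (1024 in this run) plus surplus and reserved.
total=$(get_meminfo_sketch HugePages_Total)
surp=$(get_meminfo_sketch HugePages_Surp 0)
resv=$(get_meminfo_sketch HugePages_Rsvd)
(( total == 1024 + surp + resv )) && echo "HugePages_Total matches nr_hugepages + surplus + reserved"

[annotation] The per-node variant that follows (per_node_1G_alloc) reuses the same helpers, only pointing mem_f at /sys/devices/system/node/node<N>/meminfo and splitting the requested page count across HUGENODE=0,1.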
************************************ 00:04:26.830 END TEST default_setup 00:04:26.830 ************************************ 00:04:26.830 15:54:57 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:26.830 15:54:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:26.830 15:54:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:26.830 15:54:57 -- common/autotest_common.sh@10 -- # set +x 00:04:26.830 ************************************ 00:04:26.830 START TEST per_node_1G_alloc 00:04:26.830 ************************************ 00:04:26.830 15:54:57 -- common/autotest_common.sh@1114 -- # per_node_1G_alloc 00:04:26.830 15:54:57 -- setup/hugepages.sh@143 -- # local IFS=, 00:04:26.830 15:54:57 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:26.830 15:54:57 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:26.830 15:54:57 -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:26.830 15:54:57 -- setup/hugepages.sh@51 -- # shift 00:04:26.830 15:54:57 -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:26.830 15:54:57 -- setup/hugepages.sh@52 -- # local node_ids 00:04:26.830 15:54:57 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:26.830 15:54:57 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:26.830 15:54:57 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:26.830 15:54:57 -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:26.830 15:54:57 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:26.830 15:54:57 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:26.830 15:54:57 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:26.830 15:54:57 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:26.830 15:54:57 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:26.830 15:54:57 -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:26.830 15:54:57 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:26.830 15:54:57 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:26.830 15:54:57 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:26.830 15:54:57 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:26.830 15:54:57 -- setup/hugepages.sh@73 -- # return 0 00:04:26.830 15:54:57 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:26.830 15:54:57 -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:26.830 15:54:57 -- setup/hugepages.sh@146 -- # setup output 00:04:26.830 15:54:57 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.830 15:54:57 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:30.123 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:30.123 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:30.123 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:30.123 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:30.123 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:30.123 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:30.123 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:30.123 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:30.123 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:30.123 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:30.123 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:30.123 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:30.123 0000:80:04.3 (8086 
2021): Already using the vfio-pci driver 00:04:30.123 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:30.123 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:30.123 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:30.123 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:30.387 15:55:00 -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:30.387 15:55:00 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:30.387 15:55:00 -- setup/hugepages.sh@89 -- # local node 00:04:30.387 15:55:00 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:30.387 15:55:00 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:30.387 15:55:00 -- setup/hugepages.sh@92 -- # local surp 00:04:30.387 15:55:00 -- setup/hugepages.sh@93 -- # local resv 00:04:30.387 15:55:00 -- setup/hugepages.sh@94 -- # local anon 00:04:30.387 15:55:00 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:30.387 15:55:00 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:30.387 15:55:00 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:30.387 15:55:00 -- setup/common.sh@18 -- # local node= 00:04:30.387 15:55:00 -- setup/common.sh@19 -- # local var val 00:04:30.387 15:55:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.387 15:55:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.387 15:55:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.387 15:55:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.387 15:55:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.387 15:55:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.387 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.387 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43320844 kB' 'MemAvailable: 47025520 kB' 'Buffers: 4100 kB' 'Cached: 10859100 kB' 'SwapCached: 0 kB' 'Active: 7625260 kB' 'Inactive: 3692420 kB' 'Active(anon): 7236496 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457772 kB' 'Mapped: 186688 kB' 'Shmem: 6782016 kB' 'KReclaimable: 240720 kB' 'Slab: 1013272 kB' 'SReclaimable: 240720 kB' 'SUnreclaim: 772552 kB' 'KernelStack: 22144 kB' 'PageTables: 8036 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8490292 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218108 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # 
continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.388 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.388 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 
15:55:00 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:30.389 15:55:00 -- setup/common.sh@33 -- # echo 0 00:04:30.389 15:55:00 -- setup/common.sh@33 -- # return 0 00:04:30.389 15:55:00 -- setup/hugepages.sh@97 -- # anon=0 00:04:30.389 15:55:00 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:30.389 15:55:00 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.389 15:55:00 -- setup/common.sh@18 -- # local node= 00:04:30.389 15:55:00 -- setup/common.sh@19 -- # local var val 00:04:30.389 15:55:00 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.389 15:55:00 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.389 15:55:00 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.389 15:55:00 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.389 15:55:00 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.389 15:55:00 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43321980 kB' 'MemAvailable: 47026656 kB' 'Buffers: 4100 kB' 'Cached: 10859100 kB' 'SwapCached: 0 kB' 'Active: 7624620 kB' 'Inactive: 3692420 kB' 'Active(anon): 7235856 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 457164 kB' 'Mapped: 186564 kB' 'Shmem: 6782016 kB' 'KReclaimable: 240720 kB' 'Slab: 1013208 kB' 'SReclaimable: 240720 kB' 'SUnreclaim: 772488 kB' 'KernelStack: 22096 kB' 'PageTables: 7848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8476736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218060 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.389 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.389 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ Zswap == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.390 15:55:00 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.390 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:00 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:00 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # 
continue 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.391 15:55:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.391 15:55:01 -- setup/common.sh@33 -- # echo 0 00:04:30.391 15:55:01 -- setup/common.sh@33 -- # return 0 00:04:30.391 15:55:01 -- setup/hugepages.sh@99 -- # surp=0 00:04:30.391 15:55:01 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:30.391 15:55:01 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:30.391 15:55:01 -- setup/common.sh@18 -- # local node= 00:04:30.391 15:55:01 -- setup/common.sh@19 -- # local var val 00:04:30.391 15:55:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.391 15:55:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.391 15:55:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.391 15:55:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.391 15:55:01 -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:30.391 15:55:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.391 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43321980 kB' 'MemAvailable: 47026656 kB' 'Buffers: 4100 kB' 'Cached: 10859100 kB' 'SwapCached: 0 kB' 'Active: 7624284 kB' 'Inactive: 3692420 kB' 'Active(anon): 7235520 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456776 kB' 'Mapped: 186564 kB' 'Shmem: 6782016 kB' 'KReclaimable: 240720 kB' 'Slab: 1013208 kB' 'SReclaimable: 240720 kB' 'SUnreclaim: 772488 kB' 'KernelStack: 22096 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8476748 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218060 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 
-- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': 
' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.392 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.392 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 
-- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.393 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.393 15:55:01 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:30.393 15:55:01 -- setup/common.sh@33 -- # echo 0 00:04:30.394 15:55:01 -- setup/common.sh@33 -- # return 0 00:04:30.394 15:55:01 -- setup/hugepages.sh@100 -- # resv=0 00:04:30.394 15:55:01 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:30.394 nr_hugepages=1024 00:04:30.394 15:55:01 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:30.394 resv_hugepages=0 00:04:30.394 15:55:01 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:30.394 surplus_hugepages=0 00:04:30.394 15:55:01 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:30.394 anon_hugepages=0 00:04:30.394 15:55:01 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.394 15:55:01 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:30.394 15:55:01 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:30.394 15:55:01 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:30.394 15:55:01 -- setup/common.sh@18 -- # local node= 00:04:30.394 15:55:01 -- setup/common.sh@19 -- # local var val 00:04:30.394 15:55:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.394 15:55:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.394 15:55:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:30.394 15:55:01 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:30.394 15:55:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.394 15:55:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.394 15:55:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43321728 kB' 'MemAvailable: 47026404 kB' 'Buffers: 4100 kB' 'Cached: 10859100 kB' 'SwapCached: 0 kB' 'Active: 7624360 kB' 'Inactive: 3692420 kB' 'Active(anon): 7235596 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 
'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456828 kB' 'Mapped: 186564 kB' 'Shmem: 6782016 kB' 'KReclaimable: 240720 kB' 'Slab: 1013208 kB' 'SReclaimable: 240720 kB' 'SUnreclaim: 772488 kB' 'KernelStack: 22080 kB' 'PageTables: 7772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8476768 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218060 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.394 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.394 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 
15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.395 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.395 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:30.396 15:55:01 -- setup/common.sh@33 -- # echo 1024 00:04:30.396 15:55:01 -- setup/common.sh@33 -- # return 0 00:04:30.396 15:55:01 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:30.396 15:55:01 -- setup/hugepages.sh@112 -- # get_nodes 00:04:30.396 15:55:01 -- setup/hugepages.sh@27 -- # local node 00:04:30.396 15:55:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.396 15:55:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:30.396 15:55:01 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:30.396 15:55:01 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:30.396 15:55:01 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:30.396 15:55:01 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:30.396 15:55:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.396 15:55:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.396 15:55:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:30.396 15:55:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.396 15:55:01 -- setup/common.sh@18 -- # local node=0 00:04:30.396 15:55:01 -- setup/common.sh@19 -- # local var val 00:04:30.396 15:55:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.396 15:55:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.396 15:55:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:30.396 15:55:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:30.396 15:55:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.396 15:55:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 28495508 kB' 'MemUsed: 4089860 kB' 'SwapCached: 0 kB' 'Active: 1834160 kB' 'Inactive: 183816 kB' 'Active(anon): 1683680 kB' 'Inactive(anon): 0 kB' 'Active(file): 150480 kB' 'Inactive(file): 183816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1817216 kB' 'Mapped: 55216 kB' 'AnonPages: 203536 kB' 'Shmem: 1482920 kB' 'KernelStack: 13016 kB' 'PageTables: 4368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76648 kB' 'Slab: 444500 kB' 'SReclaimable: 76648 kB' 'SUnreclaim: 367852 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.396 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.396 15:55:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 
00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.397 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.397 15:55:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.397 15:55:01 -- setup/common.sh@33 -- # echo 0 00:04:30.397 15:55:01 -- setup/common.sh@33 -- # return 0 00:04:30.397 15:55:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.397 15:55:01 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:30.397 15:55:01 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:30.397 15:55:01 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:30.397 15:55:01 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:30.397 15:55:01 -- setup/common.sh@18 -- # local node=1 00:04:30.398 15:55:01 -- setup/common.sh@19 -- # local var val 00:04:30.398 15:55:01 -- setup/common.sh@20 -- # local mem_f mem 00:04:30.398 15:55:01 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:30.398 
15:55:01 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:30.398 15:55:01 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:30.398 15:55:01 -- setup/common.sh@28 -- # mapfile -t mem 00:04:30.398 15:55:01 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698408 kB' 'MemFree: 14828164 kB' 'MemUsed: 12870244 kB' 'SwapCached: 0 kB' 'Active: 5790348 kB' 'Inactive: 3508604 kB' 'Active(anon): 5552064 kB' 'Inactive(anon): 0 kB' 'Active(file): 238284 kB' 'Inactive(file): 3508604 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9045996 kB' 'Mapped: 131852 kB' 'AnonPages: 253004 kB' 'Shmem: 5299108 kB' 'KernelStack: 8936 kB' 'PageTables: 3060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164072 kB' 'Slab: 568708 kB' 'SReclaimable: 164072 kB' 'SUnreclaim: 404636 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.398 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.398 15:55:01 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # [[ Unaccepted == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # continue 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # IFS=': ' 00:04:30.399 15:55:01 -- setup/common.sh@31 -- # read -r var val _ 00:04:30.399 15:55:01 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:30.399 15:55:01 -- setup/common.sh@33 -- # echo 0 00:04:30.399 15:55:01 -- setup/common.sh@33 -- # return 0 00:04:30.399 15:55:01 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:30.399 15:55:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.399 15:55:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:30.399 15:55:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.399 15:55:01 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:30.399 node0=512 expecting 512 00:04:30.399 15:55:01 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:30.399 15:55:01 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:30.399 15:55:01 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:30.399 15:55:01 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:30.399 node1=512 expecting 512 00:04:30.399 15:55:01 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:30.399 00:04:30.399 real 0m3.736s 00:04:30.399 user 0m1.440s 00:04:30.399 sys 0m2.370s 00:04:30.399 15:55:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:30.399 15:55:01 -- common/autotest_common.sh@10 -- # set +x 00:04:30.399 ************************************ 00:04:30.399 END TEST per_node_1G_alloc 00:04:30.399 ************************************ 00:04:30.399 15:55:01 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:30.399 15:55:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:30.399 15:55:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:30.399 15:55:01 -- common/autotest_common.sh@10 -- # set +x 00:04:30.399 ************************************ 00:04:30.399 START TEST even_2G_alloc 00:04:30.399 ************************************ 00:04:30.399 15:55:01 -- common/autotest_common.sh@1114 -- # even_2G_alloc 00:04:30.399 15:55:01 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:30.399 15:55:01 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:30.399 15:55:01 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:30.399 15:55:01 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:30.399 15:55:01 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:30.399 15:55:01 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:30.399 15:55:01 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:30.399 15:55:01 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:30.399 15:55:01 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:30.399 15:55:01 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:30.399 15:55:01 -- 
setup/hugepages.sh@67 -- # nodes_test=() 00:04:30.399 15:55:01 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:30.399 15:55:01 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:30.399 15:55:01 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:30.399 15:55:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:30.399 15:55:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:30.399 15:55:01 -- setup/hugepages.sh@83 -- # : 512 00:04:30.399 15:55:01 -- setup/hugepages.sh@84 -- # : 1 00:04:30.399 15:55:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:30.399 15:55:01 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:30.399 15:55:01 -- setup/hugepages.sh@83 -- # : 0 00:04:30.399 15:55:01 -- setup/hugepages.sh@84 -- # : 0 00:04:30.399 15:55:01 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:30.399 15:55:01 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:30.399 15:55:01 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:30.399 15:55:01 -- setup/hugepages.sh@153 -- # setup output 00:04:30.399 15:55:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:30.399 15:55:01 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:34.601 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:34.601 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:34.601 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:34.601 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:34.601 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:34.601 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:34.601 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:34.601 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:34.601 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:34.601 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:34.601 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:34.601 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:34.601 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:34.601 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:34.601 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:34.601 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:34.601 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:34.601 15:55:04 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:34.601 15:55:04 -- setup/hugepages.sh@89 -- # local node 00:04:34.601 15:55:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:34.601 15:55:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:34.601 15:55:04 -- setup/hugepages.sh@92 -- # local surp 00:04:34.601 15:55:04 -- setup/hugepages.sh@93 -- # local resv 00:04:34.601 15:55:04 -- setup/hugepages.sh@94 -- # local anon 00:04:34.601 15:55:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:34.601 15:55:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:34.601 15:55:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:34.601 15:55:04 -- setup/common.sh@18 -- # local node= 00:04:34.601 15:55:04 -- setup/common.sh@19 -- # local var val 00:04:34.602 15:55:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.602 15:55:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.602 15:55:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.602 
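The even_2G_alloc pass traced above requests 2097152 kB (2 GiB) of default-size hugepages, splits them evenly across the two NUMA nodes, and then drives scripts/setup.sh with NRHUGE=1024 and HUGE_EVEN_ALLOC=yes. A minimal sketch of that arithmetic, assuming 2048 kB hugepages and the standard per-node sysfs knob (the variable names and the direct sysfs writes here are illustrative, not the SPDK scripts themselves):

  size_kb=2097152                                      # requested total: 2 GiB
  hugepage_kb=$(awk '/^Hugepagesize:/ {print $2}' /proc/meminfo)   # 2048 on this box
  nr_hugepages=$(( size_kb / hugepage_kb ))            # 2097152 / 2048 = 1024
  nodes=(/sys/devices/system/node/node[0-9]*)
  per_node=$(( nr_hugepages / ${#nodes[@]} ))          # 1024 / 2 = 512 per node
  for n in "${nodes[@]}"; do
      # per-node allocation via sysfs; requires root and 2048 kB hugepage support
      echo "$per_node" | sudo tee "$n/hugepages/hugepages-2048kB/nr_hugepages"
  done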
15:55:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.602 15:55:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.602 15:55:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43289236 kB' 'MemAvailable: 46993912 kB' 'Buffers: 4100 kB' 'Cached: 10859236 kB' 'SwapCached: 0 kB' 'Active: 7622824 kB' 'Inactive: 3692420 kB' 'Active(anon): 7234060 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455268 kB' 'Mapped: 185736 kB' 'Shmem: 6782152 kB' 'KReclaimable: 240720 kB' 'Slab: 1013072 kB' 'SReclaimable: 240720 kB' 'SUnreclaim: 772352 kB' 'KernelStack: 21984 kB' 'PageTables: 7624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8443500 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217900 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 
15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
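The long field-by-field walk traced around this point is setup/common.sh's get_meminfo scanning the /proc/meminfo snapshot it just printed, skipping every field until it reaches the one that was asked for (AnonHugePages in this pass). Condensed into a self-contained sketch, assuming bash with extglob so the per-node "Node N " prefix strip works:

  shopt -s extglob
  get_meminfo() {                      # usage: get_meminfo <Field> [node]
      local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
      [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix each line with "Node N "
      local line
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "${val:-0}"; return 0; }
      done
      echo 0                            # field absent: report 0, as the trace does
  }
  get_meminfo AnonHugePages            # -> 0 on this box
  get_meminfo HugePages_Total 1        # -> 512 for node1, matching the earlier per-node trace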
00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.602 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.602 15:55:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:34.603 15:55:04 -- setup/common.sh@33 -- # echo 0 00:04:34.603 15:55:04 -- setup/common.sh@33 -- # return 0 00:04:34.603 15:55:04 -- setup/hugepages.sh@97 -- # anon=0 00:04:34.603 15:55:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:34.603 15:55:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.603 15:55:04 -- setup/common.sh@18 -- # local node= 00:04:34.603 15:55:04 -- setup/common.sh@19 -- # local var val 00:04:34.603 15:55:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.603 15:55:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.603 15:55:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.603 15:55:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.603 15:55:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.603 15:55:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43289836 kB' 'MemAvailable: 46994512 kB' 'Buffers: 4100 kB' 'Cached: 10859240 kB' 'SwapCached: 0 kB' 'Active: 7623028 kB' 'Inactive: 3692420 kB' 'Active(anon): 7234264 kB' 'Inactive(anon): 0 kB' 
'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455380 kB' 'Mapped: 185636 kB' 'Shmem: 6782156 kB' 'KReclaimable: 240720 kB' 'Slab: 1013040 kB' 'SReclaimable: 240720 kB' 'SUnreclaim: 772320 kB' 'KernelStack: 21968 kB' 'PageTables: 7564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8443512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217884 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 
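The snapshot just printed is internally consistent with the 2 GiB request: on this box, where only 2048 kB pages are allocated, HugePages_Total (1024) times Hugepagesize (2048 kB) equals the Hugetlb figure of 2097152 kB. A quick, purely illustrative check of that relation:

  awk '/^HugePages_Total:/ {t=$2} /^Hugepagesize:/ {s=$2} /^Hugetlb:/ {h=$2}
       END { if (t*s == h) printf "OK: %d x %d kB = %d kB\n", t, s, h;
             else          printf "mismatch: %d x %d kB != %d kB\n", t, s, h }' /proc/meminfo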
00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.603 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.603 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ Committed_AS == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 
15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.604 15:55:04 -- setup/common.sh@33 -- # echo 0 00:04:34.604 15:55:04 -- setup/common.sh@33 -- # return 0 00:04:34.604 15:55:04 -- setup/hugepages.sh@99 -- # surp=0 00:04:34.604 15:55:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:34.604 15:55:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:34.604 15:55:04 -- setup/common.sh@18 -- # local node= 00:04:34.604 15:55:04 -- setup/common.sh@19 -- # local var val 00:04:34.604 15:55:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.604 15:55:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.604 15:55:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.604 15:55:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.604 15:55:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.604 15:55:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43290420 kB' 'MemAvailable: 46995096 kB' 'Buffers: 4100 kB' 'Cached: 10859252 kB' 'SwapCached: 0 kB' 'Active: 7623316 kB' 'Inactive: 3692420 kB' 'Active(anon): 7234552 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455732 kB' 'Mapped: 185636 kB' 'Shmem: 6782168 kB' 'KReclaimable: 240720 kB' 'Slab: 1013040 kB' 'SReclaimable: 240720 kB' 'SUnreclaim: 772320 kB' 'KernelStack: 21968 kB' 'PageTables: 7564 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8443528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217884 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 
kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.604 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.604 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
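These repeated get_meminfo passes feed verify_nr_hugepages, which the trace below completes: it reports nr_hugepages=1024 with resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, then checks that the expected count matches what the kernel reports. Roughly, and with illustrative variable names rather than the script's own:

  expected=1024
  total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
  surp=$(awk  '/^HugePages_Surp:/  {print $2}' /proc/meminfo)
  resv=$(awk  '/^HugePages_Rsvd:/  {print $2}' /proc/meminfo)
  anon=$(awk  '/^AnonHugePages:/   {print $2}' /proc/meminfo)   # THP usage, 0 here
  echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"
  (( expected == total + surp + resv )) || echo "unexpected surplus/reserved hugepages"
  (( expected == total ))               || echo "hugepage count does not match the request"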
00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- 
setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.605 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.605 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ 
HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:34.606 15:55:04 -- setup/common.sh@33 -- # echo 0 00:04:34.606 15:55:04 -- setup/common.sh@33 -- # return 0 00:04:34.606 15:55:04 -- setup/hugepages.sh@100 -- # resv=0 00:04:34.606 15:55:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:34.606 nr_hugepages=1024 00:04:34.606 15:55:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:34.606 resv_hugepages=0 00:04:34.606 15:55:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:34.606 surplus_hugepages=0 00:04:34.606 15:55:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:34.606 anon_hugepages=0 00:04:34.606 15:55:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:34.606 15:55:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:34.606 15:55:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:34.606 15:55:04 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:34.606 15:55:04 -- setup/common.sh@18 -- # local node= 00:04:34.606 15:55:04 -- setup/common.sh@19 -- # local var val 00:04:34.606 15:55:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.606 15:55:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.606 15:55:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:34.606 15:55:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:34.606 15:55:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.606 15:55:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43290168 kB' 'MemAvailable: 46994844 kB' 'Buffers: 4100 kB' 'Cached: 10859276 kB' 'SwapCached: 0 kB' 'Active: 7622656 kB' 'Inactive: 3692420 kB' 'Active(anon): 7233892 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 454968 kB' 'Mapped: 185636 kB' 'Shmem: 6782192 kB' 'KReclaimable: 240720 kB' 'Slab: 1013040 kB' 'SReclaimable: 240720 kB' 'SUnreclaim: 772320 kB' 'KernelStack: 21952 kB' 'PageTables: 7516 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8443540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217884 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # 
continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.606 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.606 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ 
KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:34.607 15:55:04 -- setup/common.sh@33 -- # echo 1024 00:04:34.607 15:55:04 -- setup/common.sh@33 -- # return 0 00:04:34.607 15:55:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:34.607 15:55:04 -- setup/hugepages.sh@112 -- # get_nodes 00:04:34.607 15:55:04 -- setup/hugepages.sh@27 -- # local node 00:04:34.607 15:55:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.607 15:55:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:34.607 15:55:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:34.607 15:55:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:34.607 15:55:04 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:34.607 15:55:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:34.607 15:55:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.607 15:55:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.607 15:55:04 -- setup/hugepages.sh@117 -- # get_meminfo 
HugePages_Surp 0 00:04:34.607 15:55:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.607 15:55:04 -- setup/common.sh@18 -- # local node=0 00:04:34.607 15:55:04 -- setup/common.sh@19 -- # local var val 00:04:34.607 15:55:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.607 15:55:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.607 15:55:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:34.607 15:55:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:34.607 15:55:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.607 15:55:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 28475568 kB' 'MemUsed: 4109800 kB' 'SwapCached: 0 kB' 'Active: 1833832 kB' 'Inactive: 183816 kB' 'Active(anon): 1683352 kB' 'Inactive(anon): 0 kB' 'Active(file): 150480 kB' 'Inactive(file): 183816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1817352 kB' 'Mapped: 55160 kB' 'AnonPages: 203524 kB' 'Shmem: 1483056 kB' 'KernelStack: 13032 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76648 kB' 'Slab: 444384 kB' 'SReclaimable: 76648 kB' 'SUnreclaim: 367736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.607 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.607 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.607 15:55:04 -- setup/common.sh@31 
-- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
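Editorial aside: the trace above is setup/common.sh walking /proc/meminfo (and, just below, the per-node copies under /sys/devices/system/node/nodeN/meminfo) one field at a time until it reaches the requested key, after which hugepages.sh asserts that HugePages_Total equals the requested count plus surplus and reserved pages (1024 == 1024 + 0 + 0 in this run). A minimal, self-contained sketch of that lookup follows; the helper name and the simplified "Node <N>" prefix handling are illustrative only, not the exact get_meminfo implementation the harness uses.

    # Illustrative only: read one key from /proc/meminfo or a per-node meminfo file.
    get_meminfo_sketch() {
        local key=$1 node=$2 file=/proc/meminfo line var val
        [[ -n $node ]] && file=/sys/devices/system/node/node${node}/meminfo
        while IFS= read -r line; do
            line=${line#"Node $node "}                # per-node files prefix each line with "Node <N> "
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$key" ]] && { echo "$val"; return 0; }
        done < "$file"
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Total      -> 1024 in this run
    #      get_meminfo_sketch HugePages_Surp 0     -> 0 for node 0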
00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@33 -- # echo 0 00:04:34.608 15:55:04 -- setup/common.sh@33 -- # return 0 00:04:34.608 15:55:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.608 15:55:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:34.608 15:55:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:34.608 15:55:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:34.608 15:55:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:34.608 15:55:04 -- setup/common.sh@18 -- # local node=1 00:04:34.608 15:55:04 -- setup/common.sh@19 -- # local var val 00:04:34.608 15:55:04 -- setup/common.sh@20 -- # local mem_f mem 00:04:34.608 15:55:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:34.608 15:55:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:34.608 15:55:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:34.608 15:55:04 -- setup/common.sh@28 -- # mapfile -t mem 00:04:34.608 15:55:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:34.608 15:55:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698408 kB' 'MemFree: 14817376 kB' 'MemUsed: 12881032 kB' 'SwapCached: 0 kB' 'Active: 5789564 kB' 'Inactive: 3508604 kB' 'Active(anon): 5551280 kB' 'Inactive(anon): 0 kB' 'Active(file): 238284 kB' 'Inactive(file): 3508604 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9046028 kB' 'Mapped: 130476 kB' 'AnonPages: 252236 kB' 'Shmem: 5299140 kB' 'KernelStack: 8920 kB' 'PageTables: 3116 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164072 kB' 'Slab: 568648 kB' 'SReclaimable: 164072 kB' 'SUnreclaim: 404576 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.608 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.608 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 
-- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # continue 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # IFS=': ' 00:04:34.609 15:55:04 -- setup/common.sh@31 -- # read -r var val _ 00:04:34.609 15:55:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:34.609 15:55:04 -- setup/common.sh@33 -- # echo 0 00:04:34.609 15:55:04 -- setup/common.sh@33 -- # return 0 00:04:34.609 15:55:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:34.609 15:55:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.609 15:55:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.609 15:55:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.609 15:55:04 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:34.609 node0=512 expecting 512 00:04:34.609 15:55:04 -- 
setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:34.609 15:55:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:34.609 15:55:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:34.609 15:55:04 -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:34.609 node1=512 expecting 512 00:04:34.609 15:55:04 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:34.609 00:04:34.609 real 0m3.792s 00:04:34.609 user 0m1.456s 00:04:34.609 sys 0m2.409s 00:04:34.609 15:55:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:34.609 15:55:04 -- common/autotest_common.sh@10 -- # set +x 00:04:34.609 ************************************ 00:04:34.609 END TEST even_2G_alloc 00:04:34.609 ************************************ 00:04:34.609 15:55:04 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:34.609 15:55:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:34.609 15:55:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:34.609 15:55:04 -- common/autotest_common.sh@10 -- # set +x 00:04:34.609 ************************************ 00:04:34.609 START TEST odd_alloc 00:04:34.609 ************************************ 00:04:34.609 15:55:04 -- common/autotest_common.sh@1114 -- # odd_alloc 00:04:34.609 15:55:04 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:34.609 15:55:04 -- setup/hugepages.sh@49 -- # local size=2098176 00:04:34.609 15:55:04 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:34.609 15:55:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:34.609 15:55:04 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:34.609 15:55:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:34.609 15:55:04 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:34.609 15:55:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.610 15:55:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:34.610 15:55:04 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:34.610 15:55:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.610 15:55:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.610 15:55:04 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:34.610 15:55:04 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:34.610 15:55:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.610 15:55:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:34.610 15:55:04 -- setup/hugepages.sh@83 -- # : 513 00:04:34.610 15:55:04 -- setup/hugepages.sh@84 -- # : 1 00:04:34.610 15:55:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.610 15:55:04 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:34.610 15:55:04 -- setup/hugepages.sh@83 -- # : 0 00:04:34.610 15:55:04 -- setup/hugepages.sh@84 -- # : 0 00:04:34.610 15:55:04 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:34.610 15:55:04 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:34.610 15:55:04 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:34.610 15:55:04 -- setup/hugepages.sh@160 -- # setup output 00:04:34.610 15:55:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.610 15:55:04 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:37.904 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:37.904 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:37.904 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:37.904 0000:00:04.4 (8086 2021): 
Already using the vfio-pci driver 00:04:37.904 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:37.904 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:37.904 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:37.904 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:37.904 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:37.904 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:37.904 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:37.904 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:37.904 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:37.904 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:37.904 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:37.904 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:37.904 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:37.904 15:55:08 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:37.904 15:55:08 -- setup/hugepages.sh@89 -- # local node 00:04:37.904 15:55:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:37.904 15:55:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:37.904 15:55:08 -- setup/hugepages.sh@92 -- # local surp 00:04:37.904 15:55:08 -- setup/hugepages.sh@93 -- # local resv 00:04:37.904 15:55:08 -- setup/hugepages.sh@94 -- # local anon 00:04:37.904 15:55:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:37.904 15:55:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:37.904 15:55:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:37.904 15:55:08 -- setup/common.sh@18 -- # local node= 00:04:37.904 15:55:08 -- setup/common.sh@19 -- # local var val 00:04:37.904 15:55:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.904 15:55:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.904 15:55:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.905 15:55:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.905 15:55:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.905 15:55:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43277244 kB' 'MemAvailable: 46981920 kB' 'Buffers: 4100 kB' 'Cached: 10859372 kB' 'SwapCached: 0 kB' 'Active: 7625220 kB' 'Inactive: 3692420 kB' 'Active(anon): 7236456 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456884 kB' 'Mapped: 185728 kB' 'Shmem: 6782288 kB' 'KReclaimable: 240720 kB' 'Slab: 1013252 kB' 'SReclaimable: 240720 kB' 'SUnreclaim: 772532 kB' 'KernelStack: 21952 kB' 'PageTables: 7580 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480892 kB' 'Committed_AS: 8444156 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218044 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 
'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 
-- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 
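Editorial aside: at this point odd_alloc has requested 1025 2 MiB hugepages in total (HUGEMEM=2049 with HUGE_EVEN_ALLOC=yes), split unevenly across the two NUMA nodes as 512 and 513, and the meminfo scan running here is re-reading the counters to confirm the kernel actually granted them. The per-node request itself goes through the standard sysfs interface; the snippet below is a hedged illustration of that interface (this run drives it via scripts/setup.sh, not this helper), and writing these files requires root.

    # Illustrative helper (not part of the test scripts): pin a 2 MiB hugepage count to one node.
    set_node_hugepages() {
        local node=$1 pages=$2
        echo "$pages" > "/sys/devices/system/node/node${node}/hugepages/hugepages-2048kB/nr_hugepages"
    }
    set_node_hugepages 0 513   # one possible uneven split totalling 1025, as requested above
    set_node_hugepages 1 512
    grep HugePages_Total /proc/meminfo   # HugePages_Total:    1025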
00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.905 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.905 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.906 15:55:08 -- setup/common.sh@33 -- # echo 0 00:04:37.906 15:55:08 -- setup/common.sh@33 -- # return 0 00:04:37.906 15:55:08 -- setup/hugepages.sh@97 -- # anon=0 00:04:37.906 15:55:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:37.906 15:55:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.906 15:55:08 -- setup/common.sh@18 -- # local node= 00:04:37.906 15:55:08 -- setup/common.sh@19 -- # local var val 00:04:37.906 15:55:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.906 15:55:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.906 15:55:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.906 15:55:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.906 15:55:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.906 15:55:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43280076 kB' 'MemAvailable: 46984752 kB' 'Buffers: 4100 kB' 'Cached: 10859376 kB' 'SwapCached: 0 kB' 'Active: 7624564 kB' 'Inactive: 3692420 kB' 'Active(anon): 7235800 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456764 kB' 'Mapped: 185644 kB' 'Shmem: 6782292 kB' 'KReclaimable: 240720 kB' 'Slab: 1013268 kB' 'SReclaimable: 240720 kB' 'SUnreclaim: 772548 kB' 'KernelStack: 21968 kB' 'PageTables: 7620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480892 kB' 'Committed_AS: 8446572 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217996 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 
15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.906 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.906 15:55:08 -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 
00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.907 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.907 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.908 15:55:08 -- setup/common.sh@33 -- # echo 0 00:04:37.908 15:55:08 -- setup/common.sh@33 -- # return 0 00:04:37.908 15:55:08 -- setup/hugepages.sh@99 -- # surp=0 00:04:37.908 15:55:08 -- 
setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:37.908 15:55:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:37.908 15:55:08 -- setup/common.sh@18 -- # local node= 00:04:37.908 15:55:08 -- setup/common.sh@19 -- # local var val 00:04:37.908 15:55:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.908 15:55:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.908 15:55:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.908 15:55:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.908 15:55:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.908 15:55:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43280676 kB' 'MemAvailable: 46985352 kB' 'Buffers: 4100 kB' 'Cached: 10859388 kB' 'SwapCached: 0 kB' 'Active: 7624544 kB' 'Inactive: 3692420 kB' 'Active(anon): 7235780 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456748 kB' 'Mapped: 185696 kB' 'Shmem: 6782304 kB' 'KReclaimable: 240720 kB' 'Slab: 1013260 kB' 'SReclaimable: 240720 kB' 'SUnreclaim: 772540 kB' 'KernelStack: 21968 kB' 'PageTables: 7616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480892 kB' 'Committed_AS: 8444184 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217964 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 
15:55:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 
-- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.908 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.908 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 
15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.909 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.909 15:55:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.909 15:55:08 -- setup/common.sh@33 -- # echo 0 00:04:37.909 15:55:08 -- setup/common.sh@33 -- # return 0 00:04:37.909 15:55:08 -- setup/hugepages.sh@100 -- # resv=0 00:04:37.909 15:55:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:04:37.909 nr_hugepages=1025 00:04:37.909 15:55:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:37.909 resv_hugepages=0 00:04:37.910 15:55:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:37.910 surplus_hugepages=0 00:04:37.910 15:55:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:37.910 anon_hugepages=0 00:04:37.910 15:55:08 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:37.910 15:55:08 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:04:37.910 15:55:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:37.910 15:55:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:37.910 15:55:08 -- setup/common.sh@18 -- # local node= 00:04:37.910 15:55:08 -- setup/common.sh@19 -- # local var val 00:04:37.910 15:55:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.910 15:55:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.910 15:55:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.910 15:55:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.910 15:55:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.910 15:55:08 -- setup/common.sh@29 -- # 
mem=("${mem[@]#Node +([0-9]) }") 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43281116 kB' 'MemAvailable: 46985792 kB' 'Buffers: 4100 kB' 'Cached: 10859388 kB' 'SwapCached: 0 kB' 'Active: 7624176 kB' 'Inactive: 3692420 kB' 'Active(anon): 7235412 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456320 kB' 'Mapped: 185644 kB' 'Shmem: 6782304 kB' 'KReclaimable: 240720 kB' 'Slab: 1013260 kB' 'SReclaimable: 240720 kB' 'SUnreclaim: 772540 kB' 'KernelStack: 21952 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37480892 kB' 'Committed_AS: 8444196 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217980 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.910 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.910 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # 
read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # 
continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var 
val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.911 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.911 15:55:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.911 15:55:08 -- setup/common.sh@33 -- # echo 1025 00:04:37.911 15:55:08 -- setup/common.sh@33 -- # return 0 00:04:37.911 15:55:08 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:37.911 15:55:08 -- setup/hugepages.sh@112 -- # get_nodes 00:04:37.911 15:55:08 -- setup/hugepages.sh@27 -- # local node 00:04:37.912 15:55:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.912 15:55:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:37.912 15:55:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.912 15:55:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:37.912 15:55:08 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:37.912 15:55:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:37.912 15:55:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.912 15:55:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.912 15:55:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:37.912 15:55:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.912 15:55:08 -- setup/common.sh@18 -- # local node=0 00:04:37.912 15:55:08 -- setup/common.sh@19 -- # local var val 00:04:37.912 15:55:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:37.912 15:55:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.912 15:55:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:37.912 15:55:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:37.912 15:55:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.912 15:55:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.912 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.912 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.912 15:55:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 28472492 kB' 'MemUsed: 4112876 kB' 'SwapCached: 0 kB' 'Active: 1834664 kB' 'Inactive: 183816 kB' 'Active(anon): 1684184 kB' 'Inactive(anon): 0 kB' 'Active(file): 150480 kB' 'Inactive(file): 183816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1817480 kB' 'Mapped: 55160 kB' 'AnonPages: 204188 kB' 'Shmem: 1483184 kB' 'KernelStack: 13032 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76648 kB' 'Slab: 444384 kB' 'SReclaimable: 76648 kB' 'SUnreclaim: 367736 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 
kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:37.912 15:55:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.912 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.912 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.912 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.912 15:55:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.912 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.912 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.912 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.912 15:55:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.912 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.912 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.912 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.912 15:55:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.912 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.912 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.912 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.912 15:55:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.912 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.912 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.912 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.912 15:55:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.912 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.912 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.912 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.912 15:55:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.912 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.912 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:37.912 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:37.912 15:55:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.912 15:55:08 -- setup/common.sh@32 -- # continue 00:04:37.912 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- 
setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 
00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.173 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.173 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@33 -- # echo 0 00:04:38.174 15:55:08 -- setup/common.sh@33 -- # return 0 00:04:38.174 15:55:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.174 15:55:08 -- setup/hugepages.sh@115 -- # for node in 
"${!nodes_test[@]}" 00:04:38.174 15:55:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:38.174 15:55:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:38.174 15:55:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:38.174 15:55:08 -- setup/common.sh@18 -- # local node=1 00:04:38.174 15:55:08 -- setup/common.sh@19 -- # local var val 00:04:38.174 15:55:08 -- setup/common.sh@20 -- # local mem_f mem 00:04:38.174 15:55:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.174 15:55:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:38.174 15:55:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:38.174 15:55:08 -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.174 15:55:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698408 kB' 'MemFree: 14809096 kB' 'MemUsed: 12889312 kB' 'SwapCached: 0 kB' 'Active: 5789820 kB' 'Inactive: 3508604 kB' 'Active(anon): 5551536 kB' 'Inactive(anon): 0 kB' 'Active(file): 238284 kB' 'Inactive(file): 3508604 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9046036 kB' 'Mapped: 130484 kB' 'AnonPages: 252468 kB' 'Shmem: 5299148 kB' 'KernelStack: 8920 kB' 'PageTables: 3152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164072 kB' 'Slab: 568876 kB' 'SReclaimable: 164072 kB' 'SUnreclaim: 404804 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- 
setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 
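The node-1 pass above starts by switching its source from /proc/meminfo to /sys/devices/system/node/node1/meminfo and stripping the leading "Node 1 " prefix before running the same scan. A rough equivalent, with a hypothetical function name and no claim to match setup/common.sh line for line:

    #!/usr/bin/env bash
    shopt -s extglob                      # the +([0-9]) prefix-strip below needs extglob
    # Sketch: read per-NUMA-node counters when a node is given, else fall back to /proc/meminfo.
    node_meminfo_value() {
        local get=$1 node=${2:-} mem_f=/proc/meminfo var val _
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local -a mem
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")  # per-node files prefix every line with "Node <n> "
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    node_meminfo_value HugePages_Free 1   # e.g. 513 in the node-1 dump above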
00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.174 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.174 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
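Each per-node pass ends by folding the value it returned back into the test's running totals: the reserved count is added when the node loop starts and the surplus count (0 throughout this run) is added once the scan returns, after which the per-node results are reported and compared. A compressed, stand-alone view of that bookkeeping, with stand-in numbers taken from the output below rather than live counters:

    #!/usr/bin/env bash
    # Sketch: fold reserved and surplus hugepages into each node's total, then report the split.
    nodes_test=( [0]=512 [1]=513 )   # free hugepages seen per NUMA node (values from this run)
    resv=( [0]=0 [1]=0 )             # HugePages_Rsvd per node
    surp=( [0]=0 [1]=0 )             # HugePages_Surp per node
    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv[node] + surp[node] ))
        echo "node${node}=${nodes_test[node]}"
    done
    # In the run traced below the odd_alloc check passes with node0=512 and node1=513.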
00:04:38.175 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # continue 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # IFS=': ' 00:04:38.175 15:55:08 -- setup/common.sh@31 -- # read -r var val _ 00:04:38.175 15:55:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:38.175 15:55:08 -- setup/common.sh@33 -- # echo 0 00:04:38.175 15:55:08 -- setup/common.sh@33 -- # return 0 00:04:38.175 15:55:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:38.175 15:55:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.175 15:55:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.175 15:55:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.175 15:55:08 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:38.175 node0=512 expecting 513 00:04:38.175 15:55:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:38.175 15:55:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:38.175 15:55:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:38.175 15:55:08 -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:38.175 node1=513 expecting 512 00:04:38.175 15:55:08 -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:38.175 00:04:38.175 real 0m3.765s 00:04:38.175 user 0m1.416s 00:04:38.175 sys 0m2.421s 00:04:38.175 15:55:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:38.175 15:55:08 -- common/autotest_common.sh@10 -- # set +x 00:04:38.175 ************************************ 00:04:38.175 END TEST odd_alloc 00:04:38.175 ************************************ 00:04:38.175 15:55:08 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:38.175 15:55:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:38.175 15:55:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:38.175 15:55:08 -- common/autotest_common.sh@10 -- # set +x 00:04:38.175 ************************************ 00:04:38.175 START TEST custom_alloc 00:04:38.175 ************************************ 00:04:38.175 15:55:08 -- common/autotest_common.sh@1114 -- # custom_alloc 00:04:38.175 15:55:08 -- setup/hugepages.sh@167 -- # local IFS=, 00:04:38.175 15:55:08 -- setup/hugepages.sh@169 -- # local node 00:04:38.175 15:55:08 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:38.175 15:55:08 -- 
setup/hugepages.sh@170 -- # local nodes_hp 00:04:38.175 15:55:08 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:38.175 15:55:08 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:38.175 15:55:08 -- setup/hugepages.sh@49 -- # local size=1048576 00:04:38.175 15:55:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:38.175 15:55:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:38.175 15:55:08 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:38.175 15:55:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:38.175 15:55:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:38.175 15:55:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.175 15:55:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:38.175 15:55:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:38.175 15:55:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.175 15:55:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.175 15:55:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:38.175 15:55:08 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:38.175 15:55:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:38.175 15:55:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:38.175 15:55:08 -- setup/hugepages.sh@83 -- # : 256 00:04:38.175 15:55:08 -- setup/hugepages.sh@84 -- # : 1 00:04:38.175 15:55:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:38.175 15:55:08 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:38.175 15:55:08 -- setup/hugepages.sh@83 -- # : 0 00:04:38.175 15:55:08 -- setup/hugepages.sh@84 -- # : 0 00:04:38.175 15:55:08 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:38.175 15:55:08 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:38.175 15:55:08 -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:38.175 15:55:08 -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:38.175 15:55:08 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:38.175 15:55:08 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:38.175 15:55:08 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:38.175 15:55:08 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:38.175 15:55:08 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:38.175 15:55:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:38.175 15:55:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.175 15:55:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:38.175 15:55:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:38.175 15:55:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.175 15:55:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.175 15:55:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:38.175 15:55:08 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:38.175 15:55:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:38.175 15:55:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:38.175 15:55:08 -- setup/hugepages.sh@78 -- # return 0 00:04:38.175 15:55:08 -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:38.175 15:55:08 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:38.175 15:55:08 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:38.175 15:55:08 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:38.175 15:55:08 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:38.175 15:55:08 
-- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:38.175 15:55:08 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:38.175 15:55:08 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:38.175 15:55:08 -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:38.175 15:55:08 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:38.175 15:55:08 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:38.175 15:55:08 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:38.175 15:55:08 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:38.175 15:55:08 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:38.175 15:55:08 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:38.175 15:55:08 -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:38.175 15:55:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:38.175 15:55:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:38.175 15:55:08 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:38.175 15:55:08 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:38.175 15:55:08 -- setup/hugepages.sh@78 -- # return 0 00:04:38.175 15:55:08 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:38.175 15:55:08 -- setup/hugepages.sh@187 -- # setup output 00:04:38.175 15:55:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:38.175 15:55:08 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:41.595 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:41.595 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:41.595 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:41.595 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:41.595 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:41.595 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:41.595 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:41.595 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:41.595 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:41.595 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:41.595 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:41.595 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:41.595 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:41.595 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:41.595 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:41.595 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:41.595 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:41.595 15:55:12 -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:41.595 15:55:12 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:41.595 15:55:12 -- setup/hugepages.sh@89 -- # local node 00:04:41.595 15:55:12 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:41.595 15:55:12 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:41.595 15:55:12 -- setup/hugepages.sh@92 -- # local surp 00:04:41.595 15:55:12 -- setup/hugepages.sh@93 -- # local resv 00:04:41.595 15:55:12 -- setup/hugepages.sh@94 -- # local anon 00:04:41.595 15:55:12 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:41.595 15:55:12 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:41.595 15:55:12 -- setup/common.sh@17 -- # local 
get=AnonHugePages 00:04:41.595 15:55:12 -- setup/common.sh@18 -- # local node= 00:04:41.595 15:55:12 -- setup/common.sh@19 -- # local var val 00:04:41.595 15:55:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.595 15:55:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.595 15:55:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.595 15:55:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.595 15:55:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.595 15:55:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.595 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.595 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.596 15:55:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42250420 kB' 'MemAvailable: 45955080 kB' 'Buffers: 4100 kB' 'Cached: 10859504 kB' 'SwapCached: 0 kB' 'Active: 7624080 kB' 'Inactive: 3692420 kB' 'Active(anon): 7235316 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 456184 kB' 'Mapped: 185688 kB' 'Shmem: 6782420 kB' 'KReclaimable: 240688 kB' 'Slab: 1013964 kB' 'SReclaimable: 240688 kB' 'SUnreclaim: 773276 kB' 'KernelStack: 21968 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957628 kB' 'Committed_AS: 8444812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217980 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:41.596 15:55:12 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.596 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.859 15:55:12 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.859 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.859 15:55:12 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.859 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.859 15:55:12 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.859 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.859 15:55:12 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.859 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.859 15:55:12 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.859 15:55:12 -- 
setup/common.sh@32 -- # continue 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.859 15:55:12 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.859 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.859 15:55:12 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.859 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.859 15:55:12 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.859 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.859 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ 
Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r 
var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.860 15:55:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:41.860 15:55:12 -- setup/common.sh@33 -- # echo 0 00:04:41.860 15:55:12 -- setup/common.sh@33 -- # return 0 00:04:41.860 15:55:12 -- setup/hugepages.sh@97 -- # anon=0 00:04:41.860 15:55:12 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:41.860 15:55:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.860 15:55:12 -- setup/common.sh@18 -- # local node= 00:04:41.860 15:55:12 -- setup/common.sh@19 -- # local var val 00:04:41.860 15:55:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.860 15:55:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.860 15:55:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.860 15:55:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.860 15:55:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.860 15:55:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.860 15:55:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.860 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42251020 kB' 'MemAvailable: 45955680 kB' 'Buffers: 4100 kB' 'Cached: 10859508 kB' 'SwapCached: 0 kB' 'Active: 7623796 kB' 'Inactive: 3692420 kB' 'Active(anon): 7235032 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455912 kB' 'Mapped: 185648 kB' 'Shmem: 6782424 kB' 'KReclaimable: 240688 kB' 'Slab: 1014032 kB' 'SReclaimable: 240688 kB' 'SUnreclaim: 773344 kB' 'KernelStack: 21968 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957628 kB' 'Committed_AS: 8444824 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 
15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.861 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.861 15:55:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.862 
15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.862 15:55:12 -- setup/common.sh@33 -- # echo 0 00:04:41.862 15:55:12 -- setup/common.sh@33 -- # return 0 00:04:41.862 15:55:12 -- setup/hugepages.sh@99 -- # surp=0 00:04:41.862 15:55:12 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:41.862 15:55:12 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:41.862 15:55:12 -- setup/common.sh@18 -- # local node= 00:04:41.862 15:55:12 -- setup/common.sh@19 -- # local var val 00:04:41.862 15:55:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.862 15:55:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.862 15:55:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.862 15:55:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.862 15:55:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.862 15:55:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42251524 kB' 'MemAvailable: 45956184 kB' 'Buffers: 4100 kB' 'Cached: 10859508 kB' 'SwapCached: 0 kB' 'Active: 7623796 kB' 'Inactive: 3692420 kB' 'Active(anon): 7235032 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455912 kB' 'Mapped: 185648 kB' 'Shmem: 6782424 kB' 'KReclaimable: 240688 kB' 'Slab: 1014032 kB' 'SReclaimable: 240688 kB' 'SUnreclaim: 773344 kB' 'KernelStack: 21968 kB' 'PageTables: 7568 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957628 kB' 'Committed_AS: 8444840 kB' 'VmallocTotal: 
34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.862 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.862 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ VmallocUsed == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 
15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.863 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.863 15:55:12 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:41.863 15:55:12 -- setup/common.sh@33 -- # echo 0 00:04:41.863 15:55:12 -- setup/common.sh@33 -- # return 0 00:04:41.863 15:55:12 -- setup/hugepages.sh@100 -- # resv=0 00:04:41.863 15:55:12 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:41.863 nr_hugepages=1536 00:04:41.863 15:55:12 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:41.863 resv_hugepages=0 00:04:41.863 15:55:12 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:41.863 surplus_hugepages=0 00:04:41.863 15:55:12 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:41.863 anon_hugepages=0 00:04:41.863 15:55:12 -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:41.863 15:55:12 -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:41.863 15:55:12 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:41.863 15:55:12 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:41.863 15:55:12 -- setup/common.sh@18 -- # local node= 00:04:41.863 15:55:12 -- setup/common.sh@19 -- # local var val 00:04:41.864 15:55:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.864 15:55:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.864 15:55:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.864 15:55:12 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.864 15:55:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.864 15:55:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 42251476 kB' 'MemAvailable: 45956136 kB' 'Buffers: 4100 kB' 'Cached: 10859544 kB' 'SwapCached: 0 kB' 'Active: 7623360 kB' 'Inactive: 3692420 kB' 'Active(anon): 7234596 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 455428 kB' 'Mapped: 185648 kB' 'Shmem: 6782460 kB' 'KReclaimable: 240688 kB' 'Slab: 1014032 kB' 'SReclaimable: 240688 kB' 'SUnreclaim: 773344 kB' 'KernelStack: 21920 kB' 'PageTables: 7424 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36957628 kB' 'Committed_AS: 8444852 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 
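# A minimal sketch of the lookup being traced here: setup/common.sh's get_meminfo walks /proc/meminfo (or a node's meminfo file when a node id is given) field by field until it reaches the requested key. This is a simplified, hypothetical helper for illustration only, not the SPDK implementation:
get_meminfo_sketch() {
    local get=$1 node=${2:-}                      # key name, optional NUMA node id
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # per-node meminfo files prefix every line with "Node N "; strip it, then match the key
    sed 's/^Node [0-9]* //' "$mem_f" | awk -v key="$get:" '$1 == key { print $2 }'
}
# get_meminfo_sketch HugePages_Total    # -> 1536, matching the "echo 1536" further down
# get_meminfo_sketch HugePages_Surp 0   # -> 0, matching node0's result further down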
00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- 
setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': 
' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.864 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.864 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:41.865 15:55:12 -- setup/common.sh@33 -- # echo 1536 00:04:41.865 15:55:12 -- setup/common.sh@33 -- # return 0 00:04:41.865 15:55:12 -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:41.865 15:55:12 -- setup/hugepages.sh@112 -- # get_nodes 00:04:41.865 15:55:12 -- setup/hugepages.sh@27 -- # local node 00:04:41.865 15:55:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.865 15:55:12 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:41.865 15:55:12 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.865 15:55:12 -- 
setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:41.865 15:55:12 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:41.865 15:55:12 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.865 15:55:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.865 15:55:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.865 15:55:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:41.865 15:55:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.865 15:55:12 -- setup/common.sh@18 -- # local node=0 00:04:41.865 15:55:12 -- setup/common.sh@19 -- # local var val 00:04:41.865 15:55:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.865 15:55:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.865 15:55:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.865 15:55:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.865 15:55:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.865 15:55:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 28478816 kB' 'MemUsed: 4106552 kB' 'SwapCached: 0 kB' 'Active: 1834644 kB' 'Inactive: 183816 kB' 'Active(anon): 1684164 kB' 'Inactive(anon): 0 kB' 'Active(file): 150480 kB' 'Inactive(file): 183816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1817592 kB' 'Mapped: 55160 kB' 'AnonPages: 204020 kB' 'Shmem: 1483296 kB' 'KernelStack: 13016 kB' 'PageTables: 4312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76648 kB' 'Slab: 444828 kB' 'SReclaimable: 76648 kB' 'SUnreclaim: 368180 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ 
Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.865 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.865 15:55:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- 
setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@33 -- # echo 0 00:04:41.866 15:55:12 -- setup/common.sh@33 -- # return 0 00:04:41.866 15:55:12 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.866 15:55:12 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.866 15:55:12 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.866 15:55:12 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:41.866 15:55:12 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.866 15:55:12 -- setup/common.sh@18 -- # local node=1 00:04:41.866 15:55:12 -- setup/common.sh@19 -- # local var val 00:04:41.866 15:55:12 -- setup/common.sh@20 -- # local mem_f mem 00:04:41.866 15:55:12 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.866 15:55:12 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:41.866 15:55:12 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:41.866 15:55:12 -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.866 15:55:12 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27698408 kB' 'MemFree: 13771180 kB' 'MemUsed: 13927228 kB' 'SwapCached: 0 kB' 'Active: 5789284 kB' 'Inactive: 3508604 kB' 'Active(anon): 5551000 kB' 'Inactive(anon): 0 kB' 'Active(file): 238284 kB' 'Inactive(file): 3508604 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9046056 kB' 'Mapped: 130488 kB' 'AnonPages: 252008 kB' 'Shmem: 5299168 kB' 'KernelStack: 8952 kB' 'PageTables: 3288 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 164040 kB' 'Slab: 569204 kB' 
'SReclaimable: 164040 kB' 'SUnreclaim: 405164 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.866 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.866 15:55:12 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 
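# The scans above and below pull HugePages_Surp out of node0's and node1's meminfo; combined with the per-node totals visible in the snapshots (512 pages on node0, 1024 on node1), the test echoes "node0=512 expecting 512" / "node1=1024 expecting 1024" further down. A condensed sketch of reading those per-node totals, assuming the same sysfs layout; the helper name and argument order are made up for illustration:
check_node_hugepages_sketch() {
    local node=0 expected actual
    for expected in "$@"; do                      # expected counts, node0 first
        actual=$(awk '$3 == "HugePages_Total:" { print $4 }' \
                     "/sys/devices/system/node/node$node/meminfo")
        echo "node$node=$actual expecting $expected"
        node=$((node + 1))
    done
}
# check_node_hugepages_sketch 512 1024   # -> node0=512 expecting 512
#                                        #    node1=1024 expecting 1024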
00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- 
setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # continue 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # IFS=': ' 00:04:41.867 15:55:12 -- setup/common.sh@31 -- # read -r var val _ 00:04:41.867 15:55:12 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.867 15:55:12 -- setup/common.sh@33 -- # echo 0 00:04:41.867 15:55:12 -- setup/common.sh@33 -- # return 0 00:04:41.867 15:55:12 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:04:41.867 15:55:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.867 15:55:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.867 15:55:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.867 15:55:12 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:41.867 node0=512 expecting 512 00:04:41.867 15:55:12 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:41.867 15:55:12 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:41.867 15:55:12 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:41.867 15:55:12 -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:04:41.867 node1=1024 expecting 1024 00:04:41.867 15:55:12 -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:04:41.867 00:04:41.867 real 0m3.783s 00:04:41.867 user 0m1.397s 00:04:41.867 sys 0m2.458s 00:04:41.867 15:55:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:41.867 15:55:12 -- common/autotest_common.sh@10 -- # set +x 00:04:41.867 ************************************ 00:04:41.867 END TEST custom_alloc 00:04:41.867 ************************************ 00:04:41.867 15:55:12 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:04:41.867 15:55:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:41.867 15:55:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:41.867 15:55:12 -- common/autotest_common.sh@10 -- # set +x 00:04:41.867 ************************************ 00:04:41.867 START TEST no_shrink_alloc 00:04:41.867 ************************************ 00:04:41.867 15:55:12 -- common/autotest_common.sh@1114 -- # no_shrink_alloc 00:04:41.867 15:55:12 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:04:41.867 15:55:12 -- setup/hugepages.sh@49 -- # local size=2097152 00:04:41.867 15:55:12 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:41.867 15:55:12 -- setup/hugepages.sh@51 -- # shift 00:04:41.867 15:55:12 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:41.867 15:55:12 -- setup/hugepages.sh@52 -- # local node_ids 00:04:41.867 15:55:12 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.868 15:55:12 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:41.868 15:55:12 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:41.868 15:55:12 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:41.868 15:55:12 -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.868 15:55:12 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:41.868 15:55:12 -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:41.868 15:55:12 -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.868 15:55:12 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.868 15:55:12 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:41.868 15:55:12 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:41.868 15:55:12 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:41.868 15:55:12 -- setup/hugepages.sh@73 -- # return 0 00:04:41.868 15:55:12 -- setup/hugepages.sh@198 -- # setup output 00:04:41.868 15:55:12 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.868 15:55:12 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:46.066 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:46.066 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:46.066 0000:00:04.5 (8086 2021): 
Already using the vfio-pci driver 00:04:46.066 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:46.066 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:46.066 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:46.066 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:46.066 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:46.066 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:46.066 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:46.066 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:46.066 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:46.066 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:46.066 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:46.066 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:46.066 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:46.066 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:46.066 15:55:16 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:04:46.066 15:55:16 -- setup/hugepages.sh@89 -- # local node 00:04:46.066 15:55:16 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:46.066 15:55:16 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:46.066 15:55:16 -- setup/hugepages.sh@92 -- # local surp 00:04:46.066 15:55:16 -- setup/hugepages.sh@93 -- # local resv 00:04:46.066 15:55:16 -- setup/hugepages.sh@94 -- # local anon 00:04:46.066 15:55:16 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:46.066 15:55:16 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:46.067 15:55:16 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:46.067 15:55:16 -- setup/common.sh@18 -- # local node= 00:04:46.067 15:55:16 -- setup/common.sh@19 -- # local var val 00:04:46.067 15:55:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.067 15:55:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.067 15:55:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.067 15:55:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.067 15:55:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.067 15:55:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43288280 kB' 'MemAvailable: 46992940 kB' 'Buffers: 4100 kB' 'Cached: 10859648 kB' 'SwapCached: 0 kB' 'Active: 7627836 kB' 'Inactive: 3692420 kB' 'Active(anon): 7239072 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459356 kB' 'Mapped: 186156 kB' 'Shmem: 6782564 kB' 'KReclaimable: 240688 kB' 'Slab: 1013720 kB' 'SReclaimable: 240688 kB' 'SUnreclaim: 773032 kB' 'KernelStack: 22144 kB' 'PageTables: 8128 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8452056 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218188 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 
-- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 
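The meminfo snapshot dumped above reports 'HugePages_Total: 1024', 'Hugepagesize: 2048 kB' and 'Hugetlb: 2097152 kB'. As a quick illustrative cross-check (not part of the traced test scripts), those three figures are self-consistent, since the pool size in kB is simply pages times page size:

    # illustrative only -- values copied from the snapshot above
    hugepages_total=1024     # HugePages_Total
    hugepagesize_kb=2048     # Hugepagesize (kB)
    hugetlb_kb=2097152       # Hugetlb (kB)
    (( hugepages_total * hugepagesize_kb == hugetlb_kb )) \
        && echo "hugepage pool consistent: 1024 * 2048 kB = 2097152 kB"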
00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.067 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.067 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.068 15:55:16 -- 
setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:46.068 15:55:16 -- setup/common.sh@33 -- # echo 0 00:04:46.068 15:55:16 -- setup/common.sh@33 -- # return 0 00:04:46.068 15:55:16 -- setup/hugepages.sh@97 -- # anon=0 00:04:46.068 15:55:16 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:46.068 15:55:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.068 15:55:16 -- setup/common.sh@18 -- # local node= 00:04:46.068 15:55:16 -- setup/common.sh@19 -- # local var val 00:04:46.068 15:55:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.068 15:55:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.068 15:55:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.068 15:55:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.068 15:55:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.068 15:55:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43289756 kB' 'MemAvailable: 46994416 kB' 'Buffers: 4100 kB' 'Cached: 10859648 kB' 'SwapCached: 0 kB' 'Active: 7632264 kB' 'Inactive: 3692420 kB' 'Active(anon): 7243500 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 464736 kB' 'Mapped: 186160 kB' 'Shmem: 6782564 kB' 'KReclaimable: 240688 kB' 'Slab: 1013764 kB' 'SReclaimable: 240688 kB' 'SUnreclaim: 773076 kB' 'KernelStack: 22176 kB' 'PageTables: 8444 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8456436 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218096 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 
15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 
15:55:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.068 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.068 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 
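The trace above shows the lookup pattern used by get_meminfo: each meminfo line is split with IFS=': ' into a key and a value via read -r var val _, every key that is not the requested one hits continue, and on a match the value is echoed and the function returns 0. A simplified stand-alone sketch of that pattern is below; it reads /proc/meminfo directly, whereas the real helper in setup/common.sh iterates a mapfile'd copy and also supports per-node meminfo files, and the function name here is made up:

    # simplified sketch of the lookup traced above (not the exact SPDK helper)
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every key except the requested one
            echo "$val"
            return 0
        done < /proc/meminfo
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Surp   -> prints 0 on this runner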
00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.069 15:55:16 -- setup/common.sh@33 -- # echo 0 00:04:46.069 15:55:16 -- setup/common.sh@33 -- # return 0 00:04:46.069 15:55:16 -- 
setup/hugepages.sh@99 -- # surp=0 00:04:46.069 15:55:16 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:46.069 15:55:16 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:46.069 15:55:16 -- setup/common.sh@18 -- # local node= 00:04:46.069 15:55:16 -- setup/common.sh@19 -- # local var val 00:04:46.069 15:55:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.069 15:55:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.069 15:55:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.069 15:55:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.069 15:55:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.069 15:55:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43291404 kB' 'MemAvailable: 46996064 kB' 'Buffers: 4100 kB' 'Cached: 10859660 kB' 'SwapCached: 0 kB' 'Active: 7626188 kB' 'Inactive: 3692420 kB' 'Active(anon): 7237424 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458172 kB' 'Mapped: 186004 kB' 'Shmem: 6782576 kB' 'KReclaimable: 240688 kB' 'Slab: 1013844 kB' 'SReclaimable: 240688 kB' 'SUnreclaim: 773156 kB' 'KernelStack: 22144 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8451220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218124 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.069 
15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.069 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.069 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- 
setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- 
setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.070 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.070 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.071 15:55:16 -- setup/common.sh@33 -- # echo 0 00:04:46.071 15:55:16 -- setup/common.sh@33 -- # return 0 00:04:46.071 15:55:16 -- setup/hugepages.sh@100 -- # resv=0 00:04:46.071 15:55:16 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.071 nr_hugepages=1024 00:04:46.071 15:55:16 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.071 resv_hugepages=0 00:04:46.071 15:55:16 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.071 surplus_hugepages=0 00:04:46.071 15:55:16 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.071 anon_hugepages=0 00:04:46.071 15:55:16 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.071 15:55:16 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:46.071 15:55:16 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.071 15:55:16 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.071 15:55:16 -- setup/common.sh@18 -- # local node= 00:04:46.071 15:55:16 -- setup/common.sh@19 -- # local var val 00:04:46.071 15:55:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.071 15:55:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.071 15:55:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.071 15:55:16 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.071 15:55:16 -- 
setup/common.sh@28 -- # mapfile -t mem 00:04:46.071 15:55:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.071 15:55:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43290656 kB' 'MemAvailable: 46995316 kB' 'Buffers: 4100 kB' 'Cached: 10859676 kB' 'SwapCached: 0 kB' 'Active: 7631356 kB' 'Inactive: 3692420 kB' 'Active(anon): 7242592 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 463320 kB' 'Mapped: 186004 kB' 'Shmem: 6782592 kB' 'KReclaimable: 240688 kB' 'Slab: 1013844 kB' 'SReclaimable: 240688 kB' 'SUnreclaim: 773156 kB' 'KernelStack: 22080 kB' 'PageTables: 7804 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8456468 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 218092 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 
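The lines above echo nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0 and anon_hugepages=0, and setup/hugepages.sh@107 then evaluates (( 1024 == nr_hugepages + surp + resv )) before re-reading HugePages_Total. Read as arithmetic, the check is trivially satisfied here: with no surplus or reserved pages the kernel's pool must equal the requested count. The sketch below is an illustrative restatement only, reusing the variable names from the trace:

    # illustrative restatement of the check at setup/hugepages.sh@107
    nr_hugepages=1024   # requested pool size (nr_hugepages=1024 echoed above)
    surp=0              # HugePages_Surp from the lookup above
    resv=0              # HugePages_Rsvd from the lookup above
    (( 1024 == nr_hugepages + surp + resv )) && echo "1024 == 1024 + 0 + 0: pool verified"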
00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.071 15:55:16 -- setup/common.sh@32 -- # continue 
00:04:46.071 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.071 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 
-- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 
15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:46.072 15:55:16 -- setup/common.sh@33 -- # echo 1024 00:04:46.072 15:55:16 -- setup/common.sh@33 -- # return 0 00:04:46.072 15:55:16 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.072 15:55:16 -- setup/hugepages.sh@112 -- # get_nodes 00:04:46.072 15:55:16 -- setup/hugepages.sh@27 -- # local node 00:04:46.072 15:55:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.072 15:55:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:46.072 15:55:16 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:46.072 15:55:16 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:46.072 15:55:16 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:46.072 15:55:16 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:46.072 15:55:16 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:46.072 15:55:16 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:46.072 15:55:16 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:46.072 15:55:16 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:46.072 15:55:16 -- setup/common.sh@18 -- # local node=0 00:04:46.072 15:55:16 -- setup/common.sh@19 -- # local var val 00:04:46.072 15:55:16 -- setup/common.sh@20 -- # local mem_f mem 00:04:46.072 15:55:16 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.072 15:55:16 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:46.072 15:55:16 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:46.072 15:55:16 -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.072 15:55:16 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:46.072 15:55:16 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27429792 kB' 'MemUsed: 5155576 kB' 'SwapCached: 0 kB' 'Active: 1835060 kB' 'Inactive: 183816 kB' 'Active(anon): 1684580 kB' 'Inactive(anon): 0 kB' 'Active(file): 150480 kB' 'Inactive(file): 183816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1817676 kB' 'Mapped: 55160 kB' 'AnonPages: 204352 kB' 'Shmem: 1483380 kB' 'KernelStack: 13032 kB' 'PageTables: 4372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76648 kB' 'Slab: 444740 kB' 'SReclaimable: 76648 kB' 'SUnreclaim: 368092 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 
0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.072 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.072 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 
15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # continue 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # IFS=': ' 00:04:46.073 15:55:16 -- setup/common.sh@31 -- # read -r var val _ 00:04:46.073 15:55:16 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.073 15:55:16 -- setup/common.sh@33 -- # echo 0 00:04:46.073 15:55:16 -- setup/common.sh@33 -- # return 0 00:04:46.073 15:55:16 -- setup/hugepages.sh@117 -- # (( 
nodes_test[node] += 0 )) 00:04:46.073 15:55:16 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.073 15:55:16 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.073 15:55:16 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.073 15:55:16 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:46.073 node0=1024 expecting 1024 00:04:46.073 15:55:16 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:46.073 15:55:16 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:46.073 15:55:16 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:46.073 15:55:16 -- setup/hugepages.sh@202 -- # setup output 00:04:46.073 15:55:16 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:46.073 15:55:16 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:04:49.371 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:04:49.371 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:04:49.371 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:04:49.371 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:04:49.371 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:04:49.371 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:04:49.371 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:04:49.371 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:04:49.371 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:04:49.371 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:04:49.371 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:04:49.371 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:04:49.371 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:04:49.371 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:04:49.371 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:04:49.371 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:04:49.371 0000:d8:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:49.371 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:49.371 15:55:19 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:49.371 15:55:19 -- setup/hugepages.sh@89 -- # local node 00:04:49.371 15:55:19 -- setup/hugepages.sh@90 -- # local sorted_t 00:04:49.371 15:55:19 -- setup/hugepages.sh@91 -- # local sorted_s 00:04:49.371 15:55:19 -- setup/hugepages.sh@92 -- # local surp 00:04:49.371 15:55:19 -- setup/hugepages.sh@93 -- # local resv 00:04:49.371 15:55:19 -- setup/hugepages.sh@94 -- # local anon 00:04:49.371 15:55:19 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:49.371 15:55:19 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:49.371 15:55:19 -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:49.371 15:55:19 -- setup/common.sh@18 -- # local node= 00:04:49.371 15:55:19 -- setup/common.sh@19 -- # local var val 00:04:49.371 15:55:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.371 15:55:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.371 15:55:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.371 15:55:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.371 15:55:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.371 15:55:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # 
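The scan traced above is setup/common.sh's get_meminfo resolving HugePages_Surp for node 0: when a node is passed it switches from /proc/meminfo to /sys/devices/system/node/node0/meminfo (the check at common.sh@23-24 above), strips the "Node <n> " prefix from every line, then walks the fields with IFS=': ' until the requested key matches and echoes its value. A minimal stand-alone sketch of that lookup follows; get_meminfo_sketch is a hypothetical name for illustration only, not the SPDK helper itself.

# Minimal sketch of the lookup traced above -- not the SPDK helper from
# spdk/test/setup/common.sh; names are illustrative only.
get_meminfo_sketch() {
    local get=$1 node=$2 var val _
    local mem_f=/proc/meminfo
    # With a node argument, prefer the per-node view, as at common.sh@23-24.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix every line with "Node <n> "; drop that prefix,
    # then split each line on ': ' and print the value of the matching field.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

get_meminfo_sketch HugePages_Surp 0   # e.g. the node-0 surplus queried above

Above, setup.sh was re-run with NRHUGE=512 and CLEAR_HUGE=no and reported that 1024 pages were already allocated on node0; the walk that begins here repeats against /proc/meminfo for AnonHugePages and then for the surplus and reserved counters.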
read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43284616 kB' 'MemAvailable: 46989276 kB' 'Buffers: 4100 kB' 'Cached: 10859760 kB' 'SwapCached: 0 kB' 'Active: 7626764 kB' 'Inactive: 3692420 kB' 'Active(anon): 7238000 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458560 kB' 'Mapped: 185692 kB' 'Shmem: 6782676 kB' 'KReclaimable: 240688 kB' 'Slab: 1013876 kB' 'SReclaimable: 240688 kB' 'SUnreclaim: 773188 kB' 'KernelStack: 21984 kB' 'PageTables: 7624 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8446380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217948 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read 
-r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.371 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.371 15:55:19 -- setup/common.sh@31 -- 
# IFS=': ' 00:04:49.371 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- 
setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.372 15:55:19 -- setup/common.sh@33 -- # echo 0 00:04:49.372 15:55:19 -- setup/common.sh@33 -- # return 0 00:04:49.372 15:55:19 -- setup/hugepages.sh@97 -- # anon=0 00:04:49.372 15:55:19 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:49.372 15:55:19 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.372 15:55:19 -- setup/common.sh@18 -- # local node= 00:04:49.372 15:55:19 -- setup/common.sh@19 -- # local var val 00:04:49.372 15:55:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.372 15:55:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.372 15:55:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.372 15:55:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.372 15:55:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.372 15:55:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43285644 kB' 'MemAvailable: 46990304 kB' 'Buffers: 4100 kB' 'Cached: 10859764 kB' 'SwapCached: 0 kB' 'Active: 7626580 kB' 'Inactive: 3692420 kB' 'Active(anon): 7237816 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458444 kB' 'Mapped: 185660 kB' 'Shmem: 6782680 kB' 'KReclaimable: 240688 kB' 'Slab: 1013884 
kB' 'SReclaimable: 240688 kB' 'SUnreclaim: 773196 kB' 'KernelStack: 22016 kB' 'PageTables: 7712 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8449108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217916 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 
-- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.372 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.372 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 
15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 
-- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 
00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.373 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.373 15:55:19 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.373 15:55:19 -- setup/common.sh@33 -- # echo 0 00:04:49.373 15:55:19 -- setup/common.sh@33 -- # return 0 00:04:49.373 15:55:19 -- setup/hugepages.sh@99 -- # surp=0 00:04:49.373 15:55:19 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:49.373 15:55:19 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:49.373 15:55:19 -- setup/common.sh@18 -- # local node= 00:04:49.373 15:55:19 -- setup/common.sh@19 -- # local var val 00:04:49.373 15:55:19 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.373 15:55:19 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.373 15:55:19 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.374 15:55:19 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.374 15:55:19 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.374 15:55:19 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.374 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:19 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43288400 kB' 'MemAvailable: 46993060 kB' 'Buffers: 4100 kB' 'Cached: 10859780 kB' 'SwapCached: 0 kB' 'Active: 7626276 kB' 'Inactive: 3692420 kB' 'Active(anon): 7237512 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 458064 kB' 'Mapped: 185660 kB' 'Shmem: 6782696 kB' 'KReclaimable: 240688 kB' 'Slab: 1013892 kB' 'SReclaimable: 240688 kB' 'SUnreclaim: 773204 kB' 'KernelStack: 21968 kB' 'PageTables: 7552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8446408 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217900 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:49.374 15:55:19 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:19 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:19 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:19 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:19 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:19 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:19 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:19 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:19 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:19 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:19 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:19 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- 
# continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.374 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.374 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- 
setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:49.375 15:55:20 -- setup/common.sh@33 -- # echo 0 00:04:49.375 15:55:20 -- setup/common.sh@33 -- # return 0 00:04:49.375 15:55:20 -- setup/hugepages.sh@100 -- # resv=0 00:04:49.375 15:55:20 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:49.375 nr_hugepages=1024 
00:04:49.375 15:55:20 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:49.375 resv_hugepages=0 00:04:49.375 15:55:20 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:49.375 surplus_hugepages=0 00:04:49.375 15:55:20 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:49.375 anon_hugepages=0 00:04:49.375 15:55:20 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:49.375 15:55:20 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:49.375 15:55:20 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:49.375 15:55:20 -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:49.375 15:55:20 -- setup/common.sh@18 -- # local node= 00:04:49.375 15:55:20 -- setup/common.sh@19 -- # local var val 00:04:49.375 15:55:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.375 15:55:20 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.375 15:55:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.375 15:55:20 -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.375 15:55:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.375 15:55:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60283776 kB' 'MemFree: 43291516 kB' 'MemAvailable: 46996176 kB' 'Buffers: 4100 kB' 'Cached: 10859792 kB' 'SwapCached: 0 kB' 'Active: 7627460 kB' 'Inactive: 3692420 kB' 'Active(anon): 7238696 kB' 'Inactive(anon): 0 kB' 'Active(file): 388764 kB' 'Inactive(file): 3692420 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 459684 kB' 'Mapped: 185660 kB' 'Shmem: 6782708 kB' 'KReclaimable: 240688 kB' 'Slab: 1013892 kB' 'SReclaimable: 240688 kB' 'SUnreclaim: 773204 kB' 'KernelStack: 21968 kB' 'PageTables: 7552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37481916 kB' 'Committed_AS: 8446424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 217900 kB' 'VmallocChunk: 0 kB' 'Percpu: 77952 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1858932 kB' 'DirectMap2M: 18798592 kB' 'DirectMap1G: 49283072 kB' 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- 
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.375 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.375 15:55:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- 
setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # 
IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.376 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.376 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.377 15:55:20 -- setup/common.sh@33 -- # echo 1024 00:04:49.377 15:55:20 -- setup/common.sh@33 -- # return 0 00:04:49.377 15:55:20 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:49.377 15:55:20 -- setup/hugepages.sh@112 -- # get_nodes 00:04:49.377 15:55:20 -- setup/hugepages.sh@27 -- # local node 00:04:49.377 15:55:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.377 15:55:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:49.377 15:55:20 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.377 15:55:20 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:49.377 15:55:20 -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:49.377 15:55:20 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:49.377 15:55:20 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:49.377 15:55:20 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:49.377 15:55:20 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:49.377 15:55:20 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.377 15:55:20 -- setup/common.sh@18 -- # local node=0 00:04:49.377 15:55:20 -- setup/common.sh@19 -- # local var val 00:04:49.377 15:55:20 -- setup/common.sh@20 -- # local mem_f mem 00:04:49.377 15:55:20 -- 
setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.377 15:55:20 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:49.377 15:55:20 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:49.377 15:55:20 -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.377 15:55:20 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32585368 kB' 'MemFree: 27430044 kB' 'MemUsed: 5155324 kB' 'SwapCached: 0 kB' 'Active: 1834940 kB' 'Inactive: 183816 kB' 'Active(anon): 1684460 kB' 'Inactive(anon): 0 kB' 'Active(file): 150480 kB' 'Inactive(file): 183816 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1817748 kB' 'Mapped: 55160 kB' 'AnonPages: 204336 kB' 'Shmem: 1483452 kB' 'KernelStack: 13016 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 76648 kB' 'Slab: 444776 kB' 'SReclaimable: 76648 kB' 'SUnreclaim: 368128 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 
00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- 
setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.377 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.377 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.378 15:55:20 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.378 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.378 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.378 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.378 15:55:20 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.378 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.378 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.378 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 
00:04:49.378 15:55:20 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.378 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.378 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.378 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.378 15:55:20 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.378 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.378 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.378 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.378 15:55:20 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.378 15:55:20 -- setup/common.sh@32 -- # continue 00:04:49.378 15:55:20 -- setup/common.sh@31 -- # IFS=': ' 00:04:49.378 15:55:20 -- setup/common.sh@31 -- # read -r var val _ 00:04:49.378 15:55:20 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:49.378 15:55:20 -- setup/common.sh@33 -- # echo 0 00:04:49.378 15:55:20 -- setup/common.sh@33 -- # return 0 00:04:49.378 15:55:20 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:49.378 15:55:20 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:49.378 15:55:20 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:49.378 15:55:20 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:49.378 15:55:20 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:49.378 node0=1024 expecting 1024 00:04:49.378 15:55:20 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:49.378 00:04:49.378 real 0m7.453s 00:04:49.378 user 0m2.804s 00:04:49.378 sys 0m4.787s 00:04:49.378 15:55:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.378 15:55:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.378 ************************************ 00:04:49.378 END TEST no_shrink_alloc 00:04:49.378 ************************************ 00:04:49.378 15:55:20 -- setup/hugepages.sh@217 -- # clear_hp 00:04:49.378 15:55:20 -- setup/hugepages.sh@37 -- # local node hp 00:04:49.378 15:55:20 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:49.378 15:55:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.378 15:55:20 -- setup/hugepages.sh@41 -- # echo 0 00:04:49.378 15:55:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.378 15:55:20 -- setup/hugepages.sh@41 -- # echo 0 00:04:49.378 15:55:20 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:49.378 15:55:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.378 15:55:20 -- setup/hugepages.sh@41 -- # echo 0 00:04:49.378 15:55:20 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:49.378 15:55:20 -- setup/hugepages.sh@41 -- # echo 0 00:04:49.378 15:55:20 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:49.378 15:55:20 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:49.378 00:04:49.378 real 0m28.955s 00:04:49.378 user 0m10.246s 00:04:49.378 sys 0m17.331s 00:04:49.378 15:55:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:04:49.378 15:55:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.378 ************************************ 00:04:49.378 END TEST hugepages 00:04:49.378 ************************************ 00:04:49.638 15:55:20 -- setup/test-setup.sh@14 -- # 
run_test driver /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:49.638 15:55:20 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:49.638 15:55:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:49.638 15:55:20 -- common/autotest_common.sh@10 -- # set +x 00:04:49.638 ************************************ 00:04:49.638 START TEST driver 00:04:49.638 ************************************ 00:04:49.638 15:55:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/driver.sh 00:04:49.638 * Looking for test storage... 00:04:49.638 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:04:49.638 15:55:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:04:49.638 15:55:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:04:49.638 15:55:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:04:49.638 15:55:20 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:04:49.638 15:55:20 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:04:49.638 15:55:20 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:04:49.638 15:55:20 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:04:49.639 15:55:20 -- scripts/common.sh@335 -- # IFS=.-: 00:04:49.639 15:55:20 -- scripts/common.sh@335 -- # read -ra ver1 00:04:49.639 15:55:20 -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.639 15:55:20 -- scripts/common.sh@336 -- # read -ra ver2 00:04:49.639 15:55:20 -- scripts/common.sh@337 -- # local 'op=<' 00:04:49.639 15:55:20 -- scripts/common.sh@339 -- # ver1_l=2 00:04:49.639 15:55:20 -- scripts/common.sh@340 -- # ver2_l=1 00:04:49.639 15:55:20 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:04:49.639 15:55:20 -- scripts/common.sh@343 -- # case "$op" in 00:04:49.639 15:55:20 -- scripts/common.sh@344 -- # : 1 00:04:49.639 15:55:20 -- scripts/common.sh@363 -- # (( v = 0 )) 00:04:49.639 15:55:20 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.639 15:55:20 -- scripts/common.sh@364 -- # decimal 1 00:04:49.639 15:55:20 -- scripts/common.sh@352 -- # local d=1 00:04:49.639 15:55:20 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.639 15:55:20 -- scripts/common.sh@354 -- # echo 1 00:04:49.639 15:55:20 -- scripts/common.sh@364 -- # ver1[v]=1 00:04:49.639 15:55:20 -- scripts/common.sh@365 -- # decimal 2 00:04:49.639 15:55:20 -- scripts/common.sh@352 -- # local d=2 00:04:49.639 15:55:20 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.639 15:55:20 -- scripts/common.sh@354 -- # echo 2 00:04:49.639 15:55:20 -- scripts/common.sh@365 -- # ver2[v]=2 00:04:49.639 15:55:20 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:04:49.639 15:55:20 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:04:49.639 15:55:20 -- scripts/common.sh@367 -- # return 0 00:04:49.639 15:55:20 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.639 15:55:20 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:04:49.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.639 --rc genhtml_branch_coverage=1 00:04:49.639 --rc genhtml_function_coverage=1 00:04:49.639 --rc genhtml_legend=1 00:04:49.639 --rc geninfo_all_blocks=1 00:04:49.639 --rc geninfo_unexecuted_blocks=1 00:04:49.639 00:04:49.639 ' 00:04:49.639 15:55:20 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:04:49.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.639 --rc genhtml_branch_coverage=1 00:04:49.639 --rc genhtml_function_coverage=1 00:04:49.639 --rc genhtml_legend=1 00:04:49.639 --rc geninfo_all_blocks=1 00:04:49.639 --rc geninfo_unexecuted_blocks=1 00:04:49.639 00:04:49.639 ' 00:04:49.639 15:55:20 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:04:49.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.639 --rc genhtml_branch_coverage=1 00:04:49.639 --rc genhtml_function_coverage=1 00:04:49.639 --rc genhtml_legend=1 00:04:49.639 --rc geninfo_all_blocks=1 00:04:49.639 --rc geninfo_unexecuted_blocks=1 00:04:49.639 00:04:49.639 ' 00:04:49.639 15:55:20 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:04:49.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.639 --rc genhtml_branch_coverage=1 00:04:49.639 --rc genhtml_function_coverage=1 00:04:49.639 --rc genhtml_legend=1 00:04:49.639 --rc geninfo_all_blocks=1 00:04:49.639 --rc geninfo_unexecuted_blocks=1 00:04:49.639 00:04:49.639 ' 00:04:49.639 15:55:20 -- setup/driver.sh@68 -- # setup reset 00:04:49.639 15:55:20 -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.639 15:55:20 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.918 15:55:25 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:54.918 15:55:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:04:54.918 15:55:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:04:54.918 15:55:25 -- common/autotest_common.sh@10 -- # set +x 00:04:54.918 ************************************ 00:04:54.918 START TEST guess_driver 00:04:54.918 ************************************ 00:04:54.918 15:55:25 -- common/autotest_common.sh@1114 -- # guess_driver 00:04:54.918 15:55:25 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:54.918 15:55:25 -- setup/driver.sh@47 -- # local fail=0 00:04:54.918 15:55:25 -- setup/driver.sh@49 -- # pick_driver 00:04:54.918 15:55:25 -- setup/driver.sh@36 -- 
# vfio 00:04:54.918 15:55:25 -- setup/driver.sh@21 -- # local iommu_grups 00:04:54.918 15:55:25 -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:54.918 15:55:25 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:54.918 15:55:25 -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:54.918 15:55:25 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:54.918 15:55:25 -- setup/driver.sh@29 -- # (( 176 > 0 )) 00:04:54.919 15:55:25 -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:54.919 15:55:25 -- setup/driver.sh@14 -- # mod vfio_pci 00:04:54.919 15:55:25 -- setup/driver.sh@12 -- # dep vfio_pci 00:04:54.919 15:55:25 -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:54.919 15:55:25 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:54.919 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:54.919 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:54.919 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:54.919 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:54.919 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:54.919 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:54.919 insmod /lib/modules/6.8.9-200.fc39.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:54.919 15:55:25 -- setup/driver.sh@30 -- # return 0 00:04:54.919 15:55:25 -- setup/driver.sh@37 -- # echo vfio-pci 00:04:54.919 15:55:25 -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:54.919 15:55:25 -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:54.919 15:55:25 -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:54.919 Looking for driver=vfio-pci 00:04:54.919 15:55:25 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:54.919 15:55:25 -- setup/driver.sh@45 -- # setup output config 00:04:54.919 15:55:25 -- setup/common.sh@9 -- # [[ output == output ]] 00:04:54.919 15:55:25 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:04:58.212 15:55:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.212 15:55:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.212 15:55:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.212 15:55:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.212 15:55:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.212 15:55:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.212 15:55:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.212 15:55:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.212 15:55:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.212 15:55:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.212 15:55:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.212 15:55:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.212 15:55:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.212 15:55:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.212 15:55:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.212 15:55:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.212 15:55:28 -- setup/driver.sh@61 
-- # [[ vfio-pci == vfio-pci ]] 00:04:58.212 15:55:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.212 15:55:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.212 15:55:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.212 15:55:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.212 15:55:28 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.212 15:55:28 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.212 15:55:28 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.212 15:55:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.212 15:55:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.212 15:55:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.472 15:55:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.472 15:55:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.472 15:55:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.472 15:55:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.472 15:55:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.472 15:55:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.472 15:55:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.472 15:55:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.472 15:55:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.472 15:55:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.472 15:55:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.472 15:55:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.472 15:55:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.472 15:55:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.472 15:55:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.472 15:55:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.472 15:55:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.472 15:55:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:58.472 15:55:29 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:58.472 15:55:29 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:58.472 15:55:29 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.378 15:55:31 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:05:00.378 15:55:31 -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:05:00.378 15:55:31 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:05:00.378 15:55:31 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:05:00.378 15:55:31 -- setup/driver.sh@65 -- # setup reset 00:05:00.378 15:55:31 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:00.378 15:55:31 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:05.653 00:05:05.653 real 0m10.731s 00:05:05.653 user 0m2.803s 00:05:05.653 sys 0m5.245s 00:05:05.653 15:55:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.653 15:55:36 -- common/autotest_common.sh@10 -- # set +x 00:05:05.653 ************************************ 00:05:05.653 END TEST guess_driver 00:05:05.653 ************************************ 00:05:05.653 00:05:05.653 real 0m16.002s 00:05:05.653 user 0m4.353s 00:05:05.653 sys 0m8.143s 00:05:05.653 15:55:36 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:05.653 15:55:36 -- common/autotest_common.sh@10 -- # set +x 00:05:05.653 
************************************ 00:05:05.653 END TEST driver 00:05:05.653 ************************************ 00:05:05.653 15:55:36 -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:05:05.653 15:55:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:05.653 15:55:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:05.653 15:55:36 -- common/autotest_common.sh@10 -- # set +x 00:05:05.653 ************************************ 00:05:05.653 START TEST devices 00:05:05.653 ************************************ 00:05:05.653 15:55:36 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/devices.sh 00:05:05.653 * Looking for test storage... 00:05:05.653 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup 00:05:05.653 15:55:36 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:05:05.653 15:55:36 -- common/autotest_common.sh@1690 -- # lcov --version 00:05:05.653 15:55:36 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:05:05.653 15:55:36 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:05:05.653 15:55:36 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:05:05.653 15:55:36 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:05:05.653 15:55:36 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:05:05.653 15:55:36 -- scripts/common.sh@335 -- # IFS=.-: 00:05:05.653 15:55:36 -- scripts/common.sh@335 -- # read -ra ver1 00:05:05.653 15:55:36 -- scripts/common.sh@336 -- # IFS=.-: 00:05:05.653 15:55:36 -- scripts/common.sh@336 -- # read -ra ver2 00:05:05.653 15:55:36 -- scripts/common.sh@337 -- # local 'op=<' 00:05:05.653 15:55:36 -- scripts/common.sh@339 -- # ver1_l=2 00:05:05.653 15:55:36 -- scripts/common.sh@340 -- # ver2_l=1 00:05:05.653 15:55:36 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:05:05.653 15:55:36 -- scripts/common.sh@343 -- # case "$op" in 00:05:05.653 15:55:36 -- scripts/common.sh@344 -- # : 1 00:05:05.653 15:55:36 -- scripts/common.sh@363 -- # (( v = 0 )) 00:05:05.653 15:55:36 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:05.653 15:55:36 -- scripts/common.sh@364 -- # decimal 1 00:05:05.653 15:55:36 -- scripts/common.sh@352 -- # local d=1 00:05:05.653 15:55:36 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:05.653 15:55:36 -- scripts/common.sh@354 -- # echo 1 00:05:05.653 15:55:36 -- scripts/common.sh@364 -- # ver1[v]=1 00:05:05.653 15:55:36 -- scripts/common.sh@365 -- # decimal 2 00:05:05.653 15:55:36 -- scripts/common.sh@352 -- # local d=2 00:05:05.653 15:55:36 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:05.653 15:55:36 -- scripts/common.sh@354 -- # echo 2 00:05:05.653 15:55:36 -- scripts/common.sh@365 -- # ver2[v]=2 00:05:05.653 15:55:36 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:05:05.653 15:55:36 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:05:05.653 15:55:36 -- scripts/common.sh@367 -- # return 0 00:05:05.653 15:55:36 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:05.653 15:55:36 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:05:05.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.653 --rc genhtml_branch_coverage=1 00:05:05.653 --rc genhtml_function_coverage=1 00:05:05.653 --rc genhtml_legend=1 00:05:05.653 --rc geninfo_all_blocks=1 00:05:05.653 --rc geninfo_unexecuted_blocks=1 00:05:05.653 00:05:05.653 ' 00:05:05.653 15:55:36 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:05:05.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.653 --rc genhtml_branch_coverage=1 00:05:05.653 --rc genhtml_function_coverage=1 00:05:05.653 --rc genhtml_legend=1 00:05:05.653 --rc geninfo_all_blocks=1 00:05:05.653 --rc geninfo_unexecuted_blocks=1 00:05:05.653 00:05:05.653 ' 00:05:05.653 15:55:36 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:05:05.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.653 --rc genhtml_branch_coverage=1 00:05:05.653 --rc genhtml_function_coverage=1 00:05:05.653 --rc genhtml_legend=1 00:05:05.653 --rc geninfo_all_blocks=1 00:05:05.653 --rc geninfo_unexecuted_blocks=1 00:05:05.653 00:05:05.653 ' 00:05:05.653 15:55:36 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:05:05.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:05.653 --rc genhtml_branch_coverage=1 00:05:05.653 --rc genhtml_function_coverage=1 00:05:05.653 --rc genhtml_legend=1 00:05:05.653 --rc geninfo_all_blocks=1 00:05:05.653 --rc geninfo_unexecuted_blocks=1 00:05:05.653 00:05:05.653 ' 00:05:05.653 15:55:36 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:05:05.653 15:55:36 -- setup/devices.sh@192 -- # setup reset 00:05:05.653 15:55:36 -- setup/common.sh@9 -- # [[ reset == output ]] 00:05:05.653 15:55:36 -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:09.848 15:55:40 -- setup/devices.sh@194 -- # get_zoned_devs 00:05:09.848 15:55:40 -- common/autotest_common.sh@1664 -- # zoned_devs=() 00:05:09.848 15:55:40 -- common/autotest_common.sh@1664 -- # local -gA zoned_devs 00:05:09.848 15:55:40 -- common/autotest_common.sh@1665 -- # local nvme bdf 00:05:09.848 15:55:40 -- common/autotest_common.sh@1667 -- # for nvme in /sys/block/nvme* 00:05:09.848 15:55:40 -- common/autotest_common.sh@1668 -- # is_block_zoned nvme0n1 00:05:09.848 15:55:40 -- common/autotest_common.sh@1657 -- # local device=nvme0n1 00:05:09.848 15:55:40 -- common/autotest_common.sh@1659 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:09.848 15:55:40 -- 
common/autotest_common.sh@1660 -- # [[ none != none ]] 00:05:09.848 15:55:40 -- setup/devices.sh@196 -- # blocks=() 00:05:09.848 15:55:40 -- setup/devices.sh@196 -- # declare -a blocks 00:05:09.848 15:55:40 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:09.848 15:55:40 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:09.848 15:55:40 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:09.848 15:55:40 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:09.848 15:55:40 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:09.848 15:55:40 -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:09.848 15:55:40 -- setup/devices.sh@202 -- # pci=0000:d8:00.0 00:05:09.848 15:55:40 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\d\8\:\0\0\.\0* ]] 00:05:09.848 15:55:40 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:09.848 15:55:40 -- scripts/common.sh@380 -- # local block=nvme0n1 pt 00:05:09.848 15:55:40 -- scripts/common.sh@389 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:09.848 No valid GPT data, bailing 00:05:09.848 15:55:40 -- scripts/common.sh@393 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:09.848 15:55:40 -- scripts/common.sh@393 -- # pt= 00:05:09.848 15:55:40 -- scripts/common.sh@394 -- # return 1 00:05:09.848 15:55:40 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:09.848 15:55:40 -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:09.848 15:55:40 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:09.848 15:55:40 -- setup/common.sh@80 -- # echo 2000398934016 00:05:09.848 15:55:40 -- setup/devices.sh@204 -- # (( 2000398934016 >= min_disk_size )) 00:05:09.848 15:55:40 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:09.848 15:55:40 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:d8:00.0 00:05:09.848 15:55:40 -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:09.848 15:55:40 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:09.848 15:55:40 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:09.848 15:55:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:09.848 15:55:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:09.848 15:55:40 -- common/autotest_common.sh@10 -- # set +x 00:05:09.848 ************************************ 00:05:09.848 START TEST nvme_mount 00:05:09.848 ************************************ 00:05:09.848 15:55:40 -- common/autotest_common.sh@1114 -- # nvme_mount 00:05:09.848 15:55:40 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:09.848 15:55:40 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:09.848 15:55:40 -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:09.848 15:55:40 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:09.848 15:55:40 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:09.848 15:55:40 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:09.848 15:55:40 -- setup/common.sh@40 -- # local part_no=1 00:05:09.848 15:55:40 -- setup/common.sh@41 -- # local size=1073741824 00:05:09.848 15:55:40 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:09.848 15:55:40 -- setup/common.sh@44 -- # parts=() 00:05:09.848 15:55:40 -- setup/common.sh@44 -- # local parts 00:05:09.848 15:55:40 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:09.848 15:55:40 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:09.848 15:55:40 
-- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:09.848 15:55:40 -- setup/common.sh@46 -- # (( part++ )) 00:05:09.848 15:55:40 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:09.848 15:55:40 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:09.848 15:55:40 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:09.848 15:55:40 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:10.788 Creating new GPT entries in memory. 00:05:10.788 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:10.788 other utilities. 00:05:10.788 15:55:41 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:10.788 15:55:41 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:10.788 15:55:41 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:10.788 15:55:41 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:10.788 15:55:41 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:11.726 Creating new GPT entries in memory. 00:05:11.726 The operation has completed successfully. 00:05:11.726 15:55:42 -- setup/common.sh@57 -- # (( part++ )) 00:05:11.726 15:55:42 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:11.726 15:55:42 -- setup/common.sh@62 -- # wait 1159147 00:05:11.985 15:55:42 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.985 15:55:42 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:11.985 15:55:42 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.985 15:55:42 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:11.985 15:55:42 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:11.985 15:55:42 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.985 15:55:42 -- setup/devices.sh@105 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.985 15:55:42 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:11.985 15:55:42 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:11.985 15:55:42 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:11.985 15:55:42 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:11.985 15:55:42 -- setup/devices.sh@53 -- # local found=0 00:05:11.985 15:55:42 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:11.985 15:55:42 -- setup/devices.sh@56 -- # : 00:05:11.985 15:55:42 -- setup/devices.sh@59 -- # local pci status 00:05:11.986 15:55:42 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.986 15:55:42 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:11.986 15:55:42 -- setup/devices.sh@47 -- # setup output config 00:05:11.986 15:55:42 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.986 15:55:42 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:15.276 15:55:45 -- setup/devices.sh@62 -- # [[ 
0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.276 15:55:45 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:15.276 15:55:45 -- setup/devices.sh@63 -- # found=1 00:05:15.276 15:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.276 15:55:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.276 15:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.276 15:55:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.276 15:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.276 15:55:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.276 15:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.276 15:55:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.276 15:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.276 15:55:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.276 15:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.276 15:55:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.276 15:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.276 15:55:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.276 15:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.276 15:55:45 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.276 15:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.276 15:55:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.276 15:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.276 15:55:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.276 15:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.276 15:55:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.276 15:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.276 15:55:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.276 15:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.276 15:55:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.276 15:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.276 15:55:45 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.276 15:55:45 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.276 15:55:46 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.276 15:55:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.276 15:55:46 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:15.276 15:55:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.536 15:55:46 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:15.536 15:55:46 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:15.536 15:55:46 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.536 15:55:46 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 
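
Each verify pass above reads the setup.sh config output (run with PCI_ALLOWED pinned to the target controller) line by line with read -r pci _ _ status and only flags the controller as found when its status column advertises the expected active device. A minimal sketch of that match; the column layout and wording are taken from the trace, while the invocation itself is an assumption:
target=0000:d8:00.0
expected=nvme0n1:nvme0n1p1
found=0
while read -r pci _ _ status; do
    [[ $pci == "$target" ]] || continue
    # status reads like: "Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev"
    [[ $status == *"Active devices: "*"$expected"* ]] && found=1
done < <(PCI_ALLOWED=$target ./scripts/setup.sh config)
(( found == 1 )) && echo "$expected is active on $target"
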
00:05:15.536 15:55:46 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:15.536 15:55:46 -- setup/devices.sh@110 -- # cleanup_nvme 00:05:15.536 15:55:46 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.536 15:55:46 -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.536 15:55:46 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:15.536 15:55:46 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:15.536 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:15.536 15:55:46 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:15.536 15:55:46 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:15.795 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:15.795 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:15.795 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:15.795 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:15.795 15:55:46 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:15.796 15:55:46 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:15.796 15:55:46 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.796 15:55:46 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:15.796 15:55:46 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:15.796 15:55:46 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.796 15:55:46 -- setup/devices.sh@116 -- # verify 0000:d8:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:15.796 15:55:46 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:15.796 15:55:46 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:15.796 15:55:46 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:15.796 15:55:46 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:15.796 15:55:46 -- setup/devices.sh@53 -- # local found=0 00:05:15.796 15:55:46 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:15.796 15:55:46 -- setup/devices.sh@56 -- # : 00:05:15.796 15:55:46 -- setup/devices.sh@59 -- # local pci status 00:05:15.796 15:55:46 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:15.796 15:55:46 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:15.796 15:55:46 -- setup/devices.sh@47 -- # setup output config 00:05:15.796 15:55:46 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:15.796 15:55:46 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ 
*\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:19.088 15:55:49 -- setup/devices.sh@63 -- # found=1 00:05:19.088 15:55:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:19.088 15:55:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:19.088 15:55:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:19.088 15:55:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:19.088 15:55:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:19.088 15:55:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:19.088 15:55:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:19.088 15:55:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:19.088 15:55:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:19.088 15:55:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:19.088 15:55:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:19.088 15:55:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:19.088 15:55:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:19.088 15:55:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:19.088 15:55:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:19.088 15:55:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.088 15:55:49 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:19.088 15:55:49 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.347 15:55:50 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:19.347 15:55:50 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:19.347 15:55:50 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.347 15:55:50 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:19.347 15:55:50 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:19.347 15:55:50 -- setup/devices.sh@123 -- # umount 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:19.347 15:55:50 -- setup/devices.sh@125 -- # verify 0000:d8:00.0 data@nvme0n1 '' '' 00:05:19.347 15:55:50 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:19.347 15:55:50 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:19.347 15:55:50 -- setup/devices.sh@50 -- # local mount_point= 00:05:19.347 15:55:50 -- setup/devices.sh@51 -- # local test_file= 00:05:19.347 15:55:50 -- setup/devices.sh@53 -- # local found=0 00:05:19.347 15:55:50 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:19.347 15:55:50 -- setup/devices.sh@59 -- # local pci status 00:05:19.347 15:55:50 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:19.347 15:55:50 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:19.347 15:55:50 -- setup/devices.sh@47 -- # setup output config 00:05:19.347 15:55:50 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:19.347 15:55:50 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:22.710 15:55:53 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:22.710 15:55:53 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:22.710 15:55:53 -- setup/devices.sh@63 -- # found=1 00:05:22.710 15:55:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.710 15:55:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:22.710 15:55:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.710 15:55:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:22.710 15:55:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.710 15:55:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:22.710 15:55:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.710 15:55:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:22.710 15:55:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.710 15:55:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:22.710 15:55:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.710 15:55:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:22.710 15:55:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.710 15:55:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:22.710 15:55:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.710 15:55:53 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:22.710 15:55:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.710 15:55:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:22.710 15:55:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.710 15:55:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:22.710 15:55:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.710 15:55:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:22.710 15:55:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.710 15:55:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:22.710 15:55:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.711 15:55:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:22.711 15:55:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.711 15:55:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:22.711 15:55:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.711 15:55:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:22.711 15:55:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.711 15:55:53 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:22.711 15:55:53 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:22.970 15:55:53 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:22.970 15:55:53 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:22.970 15:55:53 -- setup/devices.sh@68 -- # return 0 00:05:22.970 15:55:53 -- setup/devices.sh@128 -- # cleanup_nvme 00:05:22.970 15:55:53 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:22.970 15:55:53 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:22.970 15:55:53 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:22.970 15:55:53 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:22.970 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:22.970 00:05:22.970 real 0m13.171s 00:05:22.970 user 0m3.859s 00:05:22.970 sys 0m7.258s 00:05:22.970 15:55:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:22.970 15:55:53 -- common/autotest_common.sh@10 -- # set +x 00:05:22.970 ************************************ 00:05:22.970 END TEST nvme_mount 00:05:22.970 ************************************ 00:05:22.970 15:55:53 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:22.970 15:55:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:05:22.970 15:55:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:05:22.970 15:55:53 -- common/autotest_common.sh@10 -- # set +x 00:05:22.970 ************************************ 00:05:22.970 START TEST dm_mount 00:05:22.970 ************************************ 00:05:22.970 15:55:53 -- common/autotest_common.sh@1114 -- # dm_mount 00:05:22.970 15:55:53 -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:22.970 15:55:53 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:22.970 15:55:53 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:22.970 15:55:53 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:22.970 15:55:53 -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:22.970 15:55:53 -- setup/common.sh@40 -- # local part_no=2 00:05:22.970 15:55:53 -- setup/common.sh@41 -- # local size=1073741824 00:05:22.970 15:55:53 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:22.970 15:55:53 -- setup/common.sh@44 -- # parts=() 00:05:22.970 15:55:53 -- setup/common.sh@44 -- # local parts 00:05:22.970 15:55:53 -- setup/common.sh@46 -- # (( part = 1 )) 00:05:22.970 15:55:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:22.970 15:55:53 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:22.970 15:55:53 -- setup/common.sh@46 -- # (( part++ )) 00:05:22.970 15:55:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:22.970 15:55:53 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:22.970 15:55:53 -- setup/common.sh@46 -- # (( part++ )) 00:05:22.970 15:55:53 -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:22.970 15:55:53 -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:22.970 15:55:53 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:22.970 
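
partition_drive above zaps the GPT and, as the following entries show, creates each 1 GiB partition with sgdisk under flock so concurrent partitioners cannot race on the same disk. A minimal sketch using the sector ranges that appear in this log; udevadm settle stands in for the test's own sync_dev_uevents.sh wait:
disk=/dev/nvme0n1
sgdisk "$disk" --zap-all                               # wipe any existing GPT/MBR structures
flock "$disk" sgdisk "$disk" --new=1:2048:2099199      # partition 1, sectors 2048-2099199 (1 GiB)
flock "$disk" sgdisk "$disk" --new=2:2099200:4196351   # partition 2, immediately after it
udevadm settle                                         # wait for the new partition device nodes
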
15:55:53 -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:23.909 Creating new GPT entries in memory. 00:05:23.909 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:23.909 other utilities. 00:05:23.909 15:55:54 -- setup/common.sh@57 -- # (( part = 1 )) 00:05:23.909 15:55:54 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:23.909 15:55:54 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:23.909 15:55:54 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:23.909 15:55:54 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:25.289 Creating new GPT entries in memory. 00:05:25.289 The operation has completed successfully. 00:05:25.289 15:55:55 -- setup/common.sh@57 -- # (( part++ )) 00:05:25.289 15:55:55 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:25.289 15:55:55 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:25.289 15:55:55 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:25.289 15:55:55 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:26.225 The operation has completed successfully. 00:05:26.225 15:55:56 -- setup/common.sh@57 -- # (( part++ )) 00:05:26.225 15:55:56 -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:26.225 15:55:56 -- setup/common.sh@62 -- # wait 1163901 00:05:26.225 15:55:56 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:26.225 15:55:56 -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:26.225 15:55:56 -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:26.225 15:55:56 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:26.225 15:55:56 -- setup/devices.sh@160 -- # for t in {1..5} 00:05:26.225 15:55:56 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:26.225 15:55:56 -- setup/devices.sh@161 -- # break 00:05:26.225 15:55:56 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:26.225 15:55:56 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:26.225 15:55:56 -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:05:26.225 15:55:56 -- setup/devices.sh@166 -- # dm=dm-2 00:05:26.225 15:55:56 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:05:26.225 15:55:56 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:05:26.225 15:55:56 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:26.225 15:55:56 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount size= 00:05:26.225 15:55:56 -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:26.225 15:55:56 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:26.225 15:55:56 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:26.225 15:55:56 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:26.225 15:55:56 -- setup/devices.sh@174 -- # verify 0000:d8:00.0 nvme0n1:nvme_dm_test 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:26.225 15:55:56 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:26.225 15:55:56 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:26.225 15:55:56 -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:26.225 15:55:56 -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:26.225 15:55:56 -- setup/devices.sh@53 -- # local found=0 00:05:26.225 15:55:56 -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:26.225 15:55:56 -- setup/devices.sh@56 -- # : 00:05:26.225 15:55:56 -- setup/devices.sh@59 -- # local pci status 00:05:26.225 15:55:56 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:26.225 15:55:56 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:26.225 15:55:56 -- setup/devices.sh@47 -- # setup output config 00:05:26.225 15:55:56 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:26.225 15:55:56 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:29.513 15:56:00 -- setup/devices.sh@63 -- # found=1 00:05:29.513 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.513 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.513 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.513 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.513 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.513 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.513 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.513 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.513 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.513 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.513 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == 
\0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.513 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.513 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.513 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.513 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.513 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.513 15:56:00 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:29.513 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.773 15:56:00 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:29.773 15:56:00 -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:29.773 15:56:00 -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:29.773 15:56:00 -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:29.773 15:56:00 -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:29.773 15:56:00 -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:29.773 15:56:00 -- setup/devices.sh@184 -- # verify 0000:d8:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:05:29.773 15:56:00 -- setup/devices.sh@48 -- # local dev=0000:d8:00.0 00:05:29.773 15:56:00 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:05:29.774 15:56:00 -- setup/devices.sh@50 -- # local mount_point= 00:05:29.774 15:56:00 -- setup/devices.sh@51 -- # local test_file= 00:05:29.774 15:56:00 -- setup/devices.sh@53 -- # local found=0 00:05:29.774 15:56:00 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:29.774 15:56:00 -- setup/devices.sh@59 -- # local pci status 00:05:29.774 15:56:00 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:29.774 15:56:00 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:d8:00.0 00:05:29.774 15:56:00 -- setup/devices.sh@47 -- # setup output config 00:05:29.774 15:56:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:05:29.774 15:56:00 -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh config 00:05:33.065 15:56:03 -- setup/devices.sh@62 -- # [[ 0000:d8:00.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.065 15:56:03 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:05:33.065 15:56:03 -- setup/devices.sh@63 -- # found=1 00:05:33.065 15:56:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.065 15:56:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.065 15:56:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.065 15:56:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.065 15:56:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.065 15:56:03 -- 
setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.065 15:56:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.065 15:56:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.065 15:56:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.065 15:56:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.065 15:56:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.065 15:56:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.065 15:56:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.065 15:56:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.065 15:56:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.065 15:56:03 -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.065 15:56:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.065 15:56:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.065 15:56:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.065 15:56:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.065 15:56:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.065 15:56:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.065 15:56:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.065 15:56:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.065 15:56:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.065 15:56:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.065 15:56:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.065 15:56:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.065 15:56:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.065 15:56:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.065 15:56:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.065 15:56:03 -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\d\8\:\0\0\.\0 ]] 00:05:33.065 15:56:03 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:33.324 15:56:03 -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:33.324 15:56:03 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:33.324 15:56:03 -- setup/devices.sh@68 -- # return 0 00:05:33.324 15:56:03 -- setup/devices.sh@187 -- # cleanup_dm 00:05:33.324 15:56:03 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:33.324 15:56:03 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:33.324 15:56:03 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:33.324 15:56:04 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.324 15:56:04 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:33.324 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:33.324 15:56:04 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:33.324 15:56:04 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:33.324 00:05:33.324 real 0m10.362s 00:05:33.324 user 0m2.620s 00:05:33.324 sys 0m4.859s 00:05:33.324 15:56:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.324 15:56:04 -- common/autotest_common.sh@10 -- # set +x 00:05:33.324 
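
dm_mount above combined the two partitions into /dev/mapper/nvme_dm_test, resolved it to dm-2 via readlink, checked that both partitions list dm-2 under /sys/class/block/*/holders, and cleanup_dm then removed it with dmsetup remove --force plus wipefs. A hypothetical sketch of building such a linear mapping; the exact dmsetup table used by the test is not visible in this log:
p1=/dev/nvme0n1p1 p2=/dev/nvme0n1p2
s1=$(blockdev --getsz "$p1") s2=$(blockdev --getsz "$p2")    # lengths in 512-byte sectors
dmsetup create nvme_dm_test <<EOF
0 $s1 linear $p1 0
$s1 $s2 linear $p2 0
EOF
dm=$(basename "$(readlink -f /dev/mapper/nvme_dm_test)")     # e.g. dm-2
[[ -e /sys/class/block/${p1##*/}/holders/$dm ]] && echo "$dm holds ${p1##*/}"
# teardown, as cleanup_dm does:
dmsetup remove --force nvme_dm_test
wipefs --all "$p1" "$p2"
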
************************************ 00:05:33.324 END TEST dm_mount 00:05:33.324 ************************************ 00:05:33.324 15:56:04 -- setup/devices.sh@1 -- # cleanup 00:05:33.324 15:56:04 -- setup/devices.sh@11 -- # cleanup_nvme 00:05:33.324 15:56:04 -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/nvme_mount 00:05:33.324 15:56:04 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.324 15:56:04 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:33.324 15:56:04 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.324 15:56:04 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:33.584 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:33.584 /dev/nvme0n1: 8 bytes were erased at offset 0x1d1c1115e00 (gpt): 45 46 49 20 50 41 52 54 00:05:33.584 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:33.584 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:33.584 15:56:04 -- setup/devices.sh@12 -- # cleanup_dm 00:05:33.584 15:56:04 -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/setup/dm_mount 00:05:33.584 15:56:04 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:33.584 15:56:04 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:33.584 15:56:04 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:33.584 15:56:04 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:33.584 15:56:04 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:33.844 00:05:33.844 real 0m28.147s 00:05:33.844 user 0m8.117s 00:05:33.844 sys 0m15.031s 00:05:33.844 15:56:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.844 15:56:04 -- common/autotest_common.sh@10 -- # set +x 00:05:33.844 ************************************ 00:05:33.844 END TEST devices 00:05:33.844 ************************************ 00:05:33.844 00:05:33.844 real 1m39.582s 00:05:33.844 user 0m30.989s 00:05:33.844 sys 0m56.592s 00:05:33.844 15:56:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:05:33.844 15:56:04 -- common/autotest_common.sh@10 -- # set +x 00:05:33.844 ************************************ 00:05:33.844 END TEST setup.sh 00:05:33.844 ************************************ 00:05:33.844 15:56:04 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh status 00:05:37.136 Hugepages 00:05:37.136 node hugesize free / total 00:05:37.136 node0 1048576kB 0 / 0 00:05:37.136 node0 2048kB 2048 / 2048 00:05:37.136 node1 1048576kB 0 / 0 00:05:37.136 node1 2048kB 0 / 0 00:05:37.136 00:05:37.136 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:37.136 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:37.136 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:37.136 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:37.136 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:37.136 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:37.136 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:37.136 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:37.136 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:37.136 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:37.136 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:37.136 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:37.136 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:37.136 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:37.136 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:37.136 I/OAT 
0000:80:04.6 8086 2021 1 ioatdma - - 00:05:37.136 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:37.395 NVMe 0000:d8:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:37.395 15:56:07 -- spdk/autotest.sh@128 -- # uname -s 00:05:37.395 15:56:07 -- spdk/autotest.sh@128 -- # [[ Linux == Linux ]] 00:05:37.395 15:56:07 -- spdk/autotest.sh@130 -- # nvme_namespace_revert 00:05:37.395 15:56:07 -- common/autotest_common.sh@1526 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:40.686 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:40.686 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:40.686 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:40.686 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:40.686 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:40.686 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:40.686 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:40.946 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:40.946 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:40.946 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:40.946 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:40.946 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:40.946 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:40.946 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:40.946 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:40.946 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:42.855 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:42.855 15:56:13 -- common/autotest_common.sh@1527 -- # sleep 1 00:05:43.793 15:56:14 -- common/autotest_common.sh@1528 -- # bdfs=() 00:05:43.793 15:56:14 -- common/autotest_common.sh@1528 -- # local bdfs 00:05:43.793 15:56:14 -- common/autotest_common.sh@1529 -- # bdfs=($(get_nvme_bdfs)) 00:05:43.793 15:56:14 -- common/autotest_common.sh@1529 -- # get_nvme_bdfs 00:05:43.793 15:56:14 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:43.793 15:56:14 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:43.793 15:56:14 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:43.793 15:56:14 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:43.793 15:56:14 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:44.053 15:56:14 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:44.053 15:56:14 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:05:44.053 15:56:14 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh reset 00:05:47.347 Waiting for block devices as requested 00:05:47.347 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:47.347 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:47.607 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:47.607 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:47.867 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:47.867 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:47.867 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:05:48.126 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:48.126 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:05:48.126 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:05:48.386 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:05:48.386 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:05:48.386 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:05:48.644 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:05:48.644 0000:80:04.1 (8086 
2021): vfio-pci -> ioatdma 00:05:48.644 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:05:48.904 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:05:48.904 15:56:19 -- common/autotest_common.sh@1533 -- # for bdf in "${bdfs[@]}" 00:05:48.904 15:56:19 -- common/autotest_common.sh@1534 -- # get_nvme_ctrlr_from_bdf 0000:d8:00.0 00:05:48.904 15:56:19 -- common/autotest_common.sh@1497 -- # readlink -f /sys/class/nvme/nvme0 00:05:48.904 15:56:19 -- common/autotest_common.sh@1497 -- # grep 0000:d8:00.0/nvme/nvme 00:05:48.904 15:56:19 -- common/autotest_common.sh@1497 -- # bdf_sysfs_path=/sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:48.904 15:56:19 -- common/autotest_common.sh@1498 -- # [[ -z /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 ]] 00:05:48.904 15:56:19 -- common/autotest_common.sh@1502 -- # basename /sys/devices/pci0000:d7/0000:d7:00.0/0000:d8:00.0/nvme/nvme0 00:05:48.904 15:56:19 -- common/autotest_common.sh@1502 -- # printf '%s\n' nvme0 00:05:48.904 15:56:19 -- common/autotest_common.sh@1534 -- # nvme_ctrlr=/dev/nvme0 00:05:48.904 15:56:19 -- common/autotest_common.sh@1535 -- # [[ -z /dev/nvme0 ]] 00:05:48.904 15:56:19 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:05:48.904 15:56:19 -- common/autotest_common.sh@1540 -- # grep oacs 00:05:48.904 15:56:19 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:48.904 15:56:19 -- common/autotest_common.sh@1540 -- # oacs=' 0xe' 00:05:48.904 15:56:19 -- common/autotest_common.sh@1541 -- # oacs_ns_manage=8 00:05:48.904 15:56:19 -- common/autotest_common.sh@1543 -- # [[ 8 -ne 0 ]] 00:05:48.904 15:56:19 -- common/autotest_common.sh@1549 -- # nvme id-ctrl /dev/nvme0 00:05:48.904 15:56:19 -- common/autotest_common.sh@1549 -- # grep unvmcap 00:05:48.904 15:56:19 -- common/autotest_common.sh@1549 -- # cut -d: -f2 00:05:48.904 15:56:19 -- common/autotest_common.sh@1549 -- # unvmcap=' 0' 00:05:48.904 15:56:19 -- common/autotest_common.sh@1550 -- # [[ 0 -eq 0 ]] 00:05:48.904 15:56:19 -- common/autotest_common.sh@1552 -- # continue 00:05:48.904 15:56:19 -- spdk/autotest.sh@133 -- # timing_exit pre_cleanup 00:05:48.904 15:56:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:48.904 15:56:19 -- common/autotest_common.sh@10 -- # set +x 00:05:49.163 15:56:19 -- spdk/autotest.sh@136 -- # timing_enter afterboot 00:05:49.163 15:56:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:49.163 15:56:19 -- common/autotest_common.sh@10 -- # set +x 00:05:49.163 15:56:19 -- spdk/autotest.sh@137 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/setup.sh 00:05:52.457 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:52.457 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:52.457 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:52.457 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:52.457 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:52.457 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:52.457 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:52.717 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:05:52.717 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:05:52.717 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:05:52.717 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:05:52.717 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:05:52.717 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:05:52.717 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:05:52.717 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:05:52.717 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
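
The nvme_namespace_revert pass above extracts two id-ctrl fields: oacs (0xe here, whose bit 3 indicates namespace management support) and unvmcap (0, so the controller is skipped with continue). A minimal sketch of that extraction, assuming nvme-cli's usual "field : value" output:
ctrl=/dev/nvme0
oacs=$(nvme id-ctrl "$ctrl" | grep oacs | cut -d: -f2)         # " 0xe" in this run
oacs_ns_manage=$(( oacs & 0x8 ))                               # bit 3: namespace management
unvmcap=$(nvme id-ctrl "$ctrl" | grep unvmcap | cut -d: -f2)   # " 0" in this run
if (( oacs_ns_manage != 0 )) && (( unvmcap == 0 )); then
    echo "$ctrl: namespace management supported, no unallocated capacity to revert"
fi
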
00:05:54.625 0000:d8:00.0 (8086 0a54): nvme -> vfio-pci 00:05:54.886 15:56:25 -- spdk/autotest.sh@138 -- # timing_exit afterboot 00:05:54.886 15:56:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:54.886 15:56:25 -- common/autotest_common.sh@10 -- # set +x 00:05:54.886 15:56:25 -- spdk/autotest.sh@142 -- # opal_revert_cleanup 00:05:54.886 15:56:25 -- common/autotest_common.sh@1586 -- # mapfile -t bdfs 00:05:54.886 15:56:25 -- common/autotest_common.sh@1586 -- # get_nvme_bdfs_by_id 0x0a54 00:05:54.886 15:56:25 -- common/autotest_common.sh@1572 -- # bdfs=() 00:05:54.886 15:56:25 -- common/autotest_common.sh@1572 -- # local bdfs 00:05:54.886 15:56:25 -- common/autotest_common.sh@1574 -- # get_nvme_bdfs 00:05:54.886 15:56:25 -- common/autotest_common.sh@1508 -- # bdfs=() 00:05:54.886 15:56:25 -- common/autotest_common.sh@1508 -- # local bdfs 00:05:54.886 15:56:25 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:54.886 15:56:25 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:54.886 15:56:25 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:05:54.886 15:56:25 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:05:54.886 15:56:25 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:05:54.886 15:56:25 -- common/autotest_common.sh@1574 -- # for bdf in $(get_nvme_bdfs) 00:05:54.886 15:56:25 -- common/autotest_common.sh@1575 -- # cat /sys/bus/pci/devices/0000:d8:00.0/device 00:05:54.886 15:56:25 -- common/autotest_common.sh@1575 -- # device=0x0a54 00:05:54.886 15:56:25 -- common/autotest_common.sh@1576 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:54.886 15:56:25 -- common/autotest_common.sh@1577 -- # bdfs+=($bdf) 00:05:54.886 15:56:25 -- common/autotest_common.sh@1581 -- # printf '%s\n' 0000:d8:00.0 00:05:54.886 15:56:25 -- common/autotest_common.sh@1587 -- # [[ -z 0000:d8:00.0 ]] 00:05:54.886 15:56:25 -- common/autotest_common.sh@1592 -- # spdk_tgt_pid=1173901 00:05:54.886 15:56:25 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:05:54.887 15:56:25 -- common/autotest_common.sh@1593 -- # waitforlisten 1173901 00:05:54.887 15:56:25 -- common/autotest_common.sh@829 -- # '[' -z 1173901 ']' 00:05:54.887 15:56:25 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.887 15:56:25 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.887 15:56:25 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.887 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.887 15:56:25 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.887 15:56:25 -- common/autotest_common.sh@10 -- # set +x 00:05:55.147 [2024-11-20 15:56:25.700628] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
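
opal_revert_cleanup above takes the bdfs reported by gen_nvme.sh, keeps only controllers whose PCI device ID reads 0x0a54, and launches spdk_tgt against them before issuing bdev_nvme_opal_revert, as the next entries show. A minimal sketch of the device-ID filter; the single bdf is hard-coded here purely for illustration:
want=0x0a54
opal_bdfs=()
for bdf in 0000:d8:00.0; do                              # normally the gen_nvme.sh | jq output
    dev_id=$(cat "/sys/bus/pci/devices/$bdf/device")     # PCI device ID, e.g. 0x0a54
    [[ $dev_id == "$want" ]] && opal_bdfs+=("$bdf")
done
printf 'controller selected for opal revert: %s\n' "${opal_bdfs[@]}"
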
00:05:55.147 [2024-11-20 15:56:25.700683] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1173901 ] 00:05:55.147 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.147 [2024-11-20 15:56:25.786267] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.147 [2024-11-20 15:56:25.824604] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:05:55.147 [2024-11-20 15:56:25.824717] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.717 15:56:26 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.717 15:56:26 -- common/autotest_common.sh@862 -- # return 0 00:05:55.717 15:56:26 -- common/autotest_common.sh@1595 -- # bdf_id=0 00:05:55.717 15:56:26 -- common/autotest_common.sh@1596 -- # for bdf in "${bdfs[@]}" 00:05:55.717 15:56:26 -- common/autotest_common.sh@1597 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:d8:00.0 00:05:59.129 nvme0n1 00:05:59.129 15:56:29 -- common/autotest_common.sh@1599 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:59.129 [2024-11-20 15:56:29.667693] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:05:59.129 request: 00:05:59.129 { 00:05:59.129 "nvme_ctrlr_name": "nvme0", 00:05:59.129 "password": "test", 00:05:59.129 "method": "bdev_nvme_opal_revert", 00:05:59.129 "req_id": 1 00:05:59.129 } 00:05:59.129 Got JSON-RPC error response 00:05:59.129 response: 00:05:59.129 { 00:05:59.129 "code": -32602, 00:05:59.129 "message": "Invalid parameters" 00:05:59.129 } 00:05:59.130 15:56:29 -- common/autotest_common.sh@1599 -- # true 00:05:59.130 15:56:29 -- common/autotest_common.sh@1600 -- # (( ++bdf_id )) 00:05:59.130 15:56:29 -- common/autotest_common.sh@1603 -- # killprocess 1173901 00:05:59.130 15:56:29 -- common/autotest_common.sh@936 -- # '[' -z 1173901 ']' 00:05:59.130 15:56:29 -- common/autotest_common.sh@940 -- # kill -0 1173901 00:05:59.130 15:56:29 -- common/autotest_common.sh@941 -- # uname 00:05:59.130 15:56:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:05:59.130 15:56:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1173901 00:05:59.130 15:56:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:05:59.130 15:56:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:05:59.130 15:56:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1173901' 00:05:59.130 killing process with pid 1173901 00:05:59.130 15:56:29 -- common/autotest_common.sh@955 -- # kill 1173901 00:05:59.130 15:56:29 -- common/autotest_common.sh@960 -- # wait 1173901 00:05:59.130 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:59.130 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:59.130 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:59.130 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:59.130 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:59.130 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:59.130 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:59.130 EAL: Unexpected size 0 of DMA remapping cleared instead of 
2097152
0 of DMA remapping cleared instead of 2097152 00:05:59.131 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:59.131 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:59.131 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:59.131 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:59.131 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:59.131 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:59.131 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:59.131 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:59.131 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:59.131 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:06:01.668 15:56:32 -- spdk/autotest.sh@148 -- # '[' 0 -eq 1 ']' 00:06:01.668 15:56:32 -- spdk/autotest.sh@152 -- # '[' 1 -eq 1 ']' 00:06:01.668 15:56:32 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:06:01.668 15:56:32 -- spdk/autotest.sh@153 -- # [[ 0 -eq 1 ]] 00:06:01.668 15:56:32 -- spdk/autotest.sh@160 -- # timing_enter lib 00:06:01.668 15:56:32 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:01.668 15:56:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.668 15:56:32 -- spdk/autotest.sh@162 -- # run_test env /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:06:01.668 15:56:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.668 15:56:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.668 15:56:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.668 ************************************ 00:06:01.668 START TEST env 00:06:01.668 ************************************ 00:06:01.668 15:56:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env.sh 00:06:01.668 * Looking for test storage... 00:06:01.668 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env 00:06:01.668 15:56:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:01.668 15:56:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:01.668 15:56:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:01.927 15:56:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:01.927 15:56:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:01.927 15:56:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:01.927 15:56:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:01.927 15:56:32 -- scripts/common.sh@335 -- # IFS=.-: 00:06:01.927 15:56:32 -- scripts/common.sh@335 -- # read -ra ver1 00:06:01.927 15:56:32 -- scripts/common.sh@336 -- # IFS=.-: 00:06:01.927 15:56:32 -- scripts/common.sh@336 -- # read -ra ver2 00:06:01.927 15:56:32 -- scripts/common.sh@337 -- # local 'op=<' 00:06:01.927 15:56:32 -- scripts/common.sh@339 -- # ver1_l=2 00:06:01.927 15:56:32 -- scripts/common.sh@340 -- # ver2_l=1 00:06:01.927 15:56:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:01.927 15:56:32 -- scripts/common.sh@343 -- # case "$op" in 00:06:01.927 15:56:32 -- scripts/common.sh@344 -- # : 1 00:06:01.927 15:56:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:01.927 15:56:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:01.927 15:56:32 -- scripts/common.sh@364 -- # decimal 1 00:06:01.927 15:56:32 -- scripts/common.sh@352 -- # local d=1 00:06:01.927 15:56:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:01.927 15:56:32 -- scripts/common.sh@354 -- # echo 1 00:06:01.927 15:56:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:01.927 15:56:32 -- scripts/common.sh@365 -- # decimal 2 00:06:01.927 15:56:32 -- scripts/common.sh@352 -- # local d=2 00:06:01.927 15:56:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:01.927 15:56:32 -- scripts/common.sh@354 -- # echo 2 00:06:01.927 15:56:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:01.927 15:56:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:01.927 15:56:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:01.927 15:56:32 -- scripts/common.sh@367 -- # return 0 00:06:01.927 15:56:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:01.927 15:56:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:01.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.927 --rc genhtml_branch_coverage=1 00:06:01.927 --rc genhtml_function_coverage=1 00:06:01.927 --rc genhtml_legend=1 00:06:01.927 --rc geninfo_all_blocks=1 00:06:01.927 --rc geninfo_unexecuted_blocks=1 00:06:01.927 00:06:01.927 ' 00:06:01.927 15:56:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:01.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.927 --rc genhtml_branch_coverage=1 00:06:01.927 --rc genhtml_function_coverage=1 00:06:01.927 --rc genhtml_legend=1 00:06:01.927 --rc geninfo_all_blocks=1 00:06:01.927 --rc geninfo_unexecuted_blocks=1 00:06:01.927 00:06:01.927 ' 00:06:01.927 15:56:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:01.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.927 --rc genhtml_branch_coverage=1 00:06:01.927 --rc genhtml_function_coverage=1 00:06:01.927 --rc genhtml_legend=1 00:06:01.927 --rc geninfo_all_blocks=1 00:06:01.927 --rc geninfo_unexecuted_blocks=1 00:06:01.927 00:06:01.927 ' 00:06:01.927 15:56:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:01.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:01.927 --rc genhtml_branch_coverage=1 00:06:01.927 --rc genhtml_function_coverage=1 00:06:01.927 --rc genhtml_legend=1 00:06:01.927 --rc geninfo_all_blocks=1 00:06:01.927 --rc geninfo_unexecuted_blocks=1 00:06:01.927 00:06:01.927 ' 00:06:01.927 15:56:32 -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:06:01.927 15:56:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.927 15:56:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.927 15:56:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.927 ************************************ 00:06:01.927 START TEST env_memory 00:06:01.927 ************************************ 00:06:01.927 15:56:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/memory/memory_ut 00:06:01.927 00:06:01.927 00:06:01.927 CUnit - A unit testing framework for C - Version 2.1-3 00:06:01.927 http://cunit.sourceforge.net/ 00:06:01.927 00:06:01.927 00:06:01.927 Suite: memory 00:06:01.927 Test: alloc and free memory map ...[2024-11-20 15:56:32.575692] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify 
failed 00:06:01.927 passed 00:06:01.927 Test: mem map translation ...[2024-11-20 15:56:32.593835] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:01.927 [2024-11-20 15:56:32.593848] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:01.927 [2024-11-20 15:56:32.593882] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:01.927 [2024-11-20 15:56:32.593890] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:01.927 passed 00:06:01.927 Test: mem map registration ...[2024-11-20 15:56:32.628748] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:06:01.927 [2024-11-20 15:56:32.628761] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:06:01.927 passed 00:06:01.927 Test: mem map adjacent registrations ...passed 00:06:01.927 00:06:01.927 Run Summary: Type Total Ran Passed Failed Inactive 00:06:01.927 suites 1 1 n/a 0 0 00:06:01.927 tests 4 4 4 0 0 00:06:01.927 asserts 152 152 152 0 n/a 00:06:01.927 00:06:01.927 Elapsed time = 0.128 seconds 00:06:01.927 00:06:01.927 real 0m0.136s 00:06:01.927 user 0m0.128s 00:06:01.927 sys 0m0.008s 00:06:01.927 15:56:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:01.927 15:56:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.927 ************************************ 00:06:01.927 END TEST env_memory 00:06:01.927 ************************************ 00:06:01.927 15:56:32 -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:01.927 15:56:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:01.927 15:56:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:01.927 15:56:32 -- common/autotest_common.sh@10 -- # set +x 00:06:01.927 ************************************ 00:06:01.927 START TEST env_vtophys 00:06:01.927 ************************************ 00:06:01.927 15:56:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:02.187 EAL: lib.eal log level changed from notice to debug 00:06:02.187 EAL: Detected lcore 0 as core 0 on socket 0 00:06:02.187 EAL: Detected lcore 1 as core 1 on socket 0 00:06:02.187 EAL: Detected lcore 2 as core 2 on socket 0 00:06:02.187 EAL: Detected lcore 3 as core 3 on socket 0 00:06:02.187 EAL: Detected lcore 4 as core 4 on socket 0 00:06:02.187 EAL: Detected lcore 5 as core 5 on socket 0 00:06:02.187 EAL: Detected lcore 6 as core 6 on socket 0 00:06:02.187 EAL: Detected lcore 7 as core 8 on socket 0 00:06:02.187 EAL: Detected lcore 8 as core 9 on socket 0 00:06:02.187 EAL: Detected lcore 9 as core 10 on socket 0 00:06:02.187 EAL: Detected lcore 10 as core 11 on socket 0 00:06:02.188 EAL: Detected lcore 11 as core 12 on socket 0 00:06:02.188 EAL: Detected lcore 12 as core 13 on socket 0 00:06:02.188 EAL: Detected lcore 13 as core 14 on socket 0 00:06:02.188 EAL: 
Detected lcore 14 as core 16 on socket 0 00:06:02.188 EAL: Detected lcore 15 as core 17 on socket 0 00:06:02.188 EAL: Detected lcore 16 as core 18 on socket 0 00:06:02.188 EAL: Detected lcore 17 as core 19 on socket 0 00:06:02.188 EAL: Detected lcore 18 as core 20 on socket 0 00:06:02.188 EAL: Detected lcore 19 as core 21 on socket 0 00:06:02.188 EAL: Detected lcore 20 as core 22 on socket 0 00:06:02.188 EAL: Detected lcore 21 as core 24 on socket 0 00:06:02.188 EAL: Detected lcore 22 as core 25 on socket 0 00:06:02.188 EAL: Detected lcore 23 as core 26 on socket 0 00:06:02.188 EAL: Detected lcore 24 as core 27 on socket 0 00:06:02.188 EAL: Detected lcore 25 as core 28 on socket 0 00:06:02.188 EAL: Detected lcore 26 as core 29 on socket 0 00:06:02.188 EAL: Detected lcore 27 as core 30 on socket 0 00:06:02.188 EAL: Detected lcore 28 as core 0 on socket 1 00:06:02.188 EAL: Detected lcore 29 as core 1 on socket 1 00:06:02.188 EAL: Detected lcore 30 as core 2 on socket 1 00:06:02.188 EAL: Detected lcore 31 as core 3 on socket 1 00:06:02.188 EAL: Detected lcore 32 as core 4 on socket 1 00:06:02.188 EAL: Detected lcore 33 as core 5 on socket 1 00:06:02.188 EAL: Detected lcore 34 as core 6 on socket 1 00:06:02.188 EAL: Detected lcore 35 as core 8 on socket 1 00:06:02.188 EAL: Detected lcore 36 as core 9 on socket 1 00:06:02.188 EAL: Detected lcore 37 as core 10 on socket 1 00:06:02.188 EAL: Detected lcore 38 as core 11 on socket 1 00:06:02.188 EAL: Detected lcore 39 as core 12 on socket 1 00:06:02.188 EAL: Detected lcore 40 as core 13 on socket 1 00:06:02.188 EAL: Detected lcore 41 as core 14 on socket 1 00:06:02.188 EAL: Detected lcore 42 as core 16 on socket 1 00:06:02.188 EAL: Detected lcore 43 as core 17 on socket 1 00:06:02.188 EAL: Detected lcore 44 as core 18 on socket 1 00:06:02.188 EAL: Detected lcore 45 as core 19 on socket 1 00:06:02.188 EAL: Detected lcore 46 as core 20 on socket 1 00:06:02.188 EAL: Detected lcore 47 as core 21 on socket 1 00:06:02.188 EAL: Detected lcore 48 as core 22 on socket 1 00:06:02.188 EAL: Detected lcore 49 as core 24 on socket 1 00:06:02.188 EAL: Detected lcore 50 as core 25 on socket 1 00:06:02.188 EAL: Detected lcore 51 as core 26 on socket 1 00:06:02.188 EAL: Detected lcore 52 as core 27 on socket 1 00:06:02.188 EAL: Detected lcore 53 as core 28 on socket 1 00:06:02.188 EAL: Detected lcore 54 as core 29 on socket 1 00:06:02.188 EAL: Detected lcore 55 as core 30 on socket 1 00:06:02.188 EAL: Detected lcore 56 as core 0 on socket 0 00:06:02.188 EAL: Detected lcore 57 as core 1 on socket 0 00:06:02.188 EAL: Detected lcore 58 as core 2 on socket 0 00:06:02.188 EAL: Detected lcore 59 as core 3 on socket 0 00:06:02.188 EAL: Detected lcore 60 as core 4 on socket 0 00:06:02.188 EAL: Detected lcore 61 as core 5 on socket 0 00:06:02.188 EAL: Detected lcore 62 as core 6 on socket 0 00:06:02.188 EAL: Detected lcore 63 as core 8 on socket 0 00:06:02.188 EAL: Detected lcore 64 as core 9 on socket 0 00:06:02.188 EAL: Detected lcore 65 as core 10 on socket 0 00:06:02.188 EAL: Detected lcore 66 as core 11 on socket 0 00:06:02.188 EAL: Detected lcore 67 as core 12 on socket 0 00:06:02.188 EAL: Detected lcore 68 as core 13 on socket 0 00:06:02.188 EAL: Detected lcore 69 as core 14 on socket 0 00:06:02.188 EAL: Detected lcore 70 as core 16 on socket 0 00:06:02.188 EAL: Detected lcore 71 as core 17 on socket 0 00:06:02.188 EAL: Detected lcore 72 as core 18 on socket 0 00:06:02.188 EAL: Detected lcore 73 as core 19 on socket 0 00:06:02.188 EAL: Detected lcore 74 as core 20 on 
socket 0 00:06:02.188 EAL: Detected lcore 75 as core 21 on socket 0 00:06:02.188 EAL: Detected lcore 76 as core 22 on socket 0 00:06:02.188 EAL: Detected lcore 77 as core 24 on socket 0 00:06:02.188 EAL: Detected lcore 78 as core 25 on socket 0 00:06:02.188 EAL: Detected lcore 79 as core 26 on socket 0 00:06:02.188 EAL: Detected lcore 80 as core 27 on socket 0 00:06:02.188 EAL: Detected lcore 81 as core 28 on socket 0 00:06:02.188 EAL: Detected lcore 82 as core 29 on socket 0 00:06:02.188 EAL: Detected lcore 83 as core 30 on socket 0 00:06:02.188 EAL: Detected lcore 84 as core 0 on socket 1 00:06:02.188 EAL: Detected lcore 85 as core 1 on socket 1 00:06:02.188 EAL: Detected lcore 86 as core 2 on socket 1 00:06:02.188 EAL: Detected lcore 87 as core 3 on socket 1 00:06:02.188 EAL: Detected lcore 88 as core 4 on socket 1 00:06:02.188 EAL: Detected lcore 89 as core 5 on socket 1 00:06:02.188 EAL: Detected lcore 90 as core 6 on socket 1 00:06:02.188 EAL: Detected lcore 91 as core 8 on socket 1 00:06:02.188 EAL: Detected lcore 92 as core 9 on socket 1 00:06:02.188 EAL: Detected lcore 93 as core 10 on socket 1 00:06:02.188 EAL: Detected lcore 94 as core 11 on socket 1 00:06:02.188 EAL: Detected lcore 95 as core 12 on socket 1 00:06:02.188 EAL: Detected lcore 96 as core 13 on socket 1 00:06:02.188 EAL: Detected lcore 97 as core 14 on socket 1 00:06:02.188 EAL: Detected lcore 98 as core 16 on socket 1 00:06:02.188 EAL: Detected lcore 99 as core 17 on socket 1 00:06:02.188 EAL: Detected lcore 100 as core 18 on socket 1 00:06:02.188 EAL: Detected lcore 101 as core 19 on socket 1 00:06:02.188 EAL: Detected lcore 102 as core 20 on socket 1 00:06:02.188 EAL: Detected lcore 103 as core 21 on socket 1 00:06:02.188 EAL: Detected lcore 104 as core 22 on socket 1 00:06:02.188 EAL: Detected lcore 105 as core 24 on socket 1 00:06:02.188 EAL: Detected lcore 106 as core 25 on socket 1 00:06:02.188 EAL: Detected lcore 107 as core 26 on socket 1 00:06:02.188 EAL: Detected lcore 108 as core 27 on socket 1 00:06:02.188 EAL: Detected lcore 109 as core 28 on socket 1 00:06:02.188 EAL: Detected lcore 110 as core 29 on socket 1 00:06:02.188 EAL: Detected lcore 111 as core 30 on socket 1 00:06:02.188 EAL: Maximum logical cores by configuration: 128 00:06:02.188 EAL: Detected CPU lcores: 112 00:06:02.188 EAL: Detected NUMA nodes: 2 00:06:02.188 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:06:02.188 EAL: Detected shared linkage of DPDK 00:06:02.188 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:06:02.188 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:06:02.188 EAL: Registered [vdev] bus. 
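The env_memory *ERROR* lines earlier in this run ("invalid spdk_mem_map_set_translation parameters, vaddr=... len=...", "invalid spdk_mem_register parameters", "invalid usermode virtual address 281474976710656") are memory_ut deliberately passing unaligned or out-of-range values: the API expects 2 MB-aligned vaddr/len below the 48-bit user address limit. A minimal sketch of the intended usage, assuming the spdk_mem_map declarations in include/spdk/env.h (illustrative only, not the test's source):

#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

/* Sketch only: exercise the same calls memory_ut validates above. */
int main(void)
{
	struct spdk_env_opts opts;
	struct spdk_mem_map *map;
	uint64_t vaddr = 0x200000200000ULL;   /* 2 MB aligned, below 2^48 */
	uint64_t len = 0x200000;              /* one 2 MB hugepage */
	uint64_t size = len;

	spdk_env_opts_init(&opts);
	opts.name = "mem_map_sketch";         /* hypothetical app name */
	if (spdk_env_init(&opts) != 0) {
		return 1;
	}

	/* 0 is the translation returned for addresses that were never set;
	 * NULL ops: no register/unregister notifications needed here. */
	map = spdk_mem_map_alloc(0, NULL, NULL);
	if (map == NULL) {
		return 1;
	}

	/* Both vaddr and len must be 2 MB aligned; passing vaddr=1234 or
	 * len=1234, as the test does, is rejected with the *ERROR* lines above. */
	spdk_mem_map_set_translation(map, vaddr, len, 0x12345678);
	printf("translation: 0x%" PRIx64 "\n", spdk_mem_map_translate(map, vaddr, &size));
	spdk_mem_map_clear_translation(map, vaddr, len);
	spdk_mem_map_free(&map);
	return 0;
}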
00:06:02.188 EAL: bus.vdev log level changed from disabled to notice 00:06:02.188 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:06:02.188 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:06:02.188 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:06:02.188 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:06:02.188 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:06:02.188 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:06:02.188 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:06:02.188 EAL: open shared lib /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:06:02.188 EAL: No shared files mode enabled, IPC will be disabled 00:06:02.188 EAL: No shared files mode enabled, IPC is disabled 00:06:02.188 EAL: Bus pci wants IOVA as 'DC' 00:06:02.188 EAL: Bus vdev wants IOVA as 'DC' 00:06:02.188 EAL: Buses did not request a specific IOVA mode. 00:06:02.188 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:02.188 EAL: Selected IOVA mode 'VA' 00:06:02.188 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.188 EAL: Probing VFIO support... 00:06:02.188 EAL: IOMMU type 1 (Type 1) is supported 00:06:02.188 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:02.188 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:02.188 EAL: VFIO support initialized 00:06:02.188 EAL: Ask a virtual area of 0x2e000 bytes 00:06:02.188 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:02.188 EAL: Setting up physically contiguous memory... 
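The EAL lines here (shared-library loading, "Buses did not request a specific IOVA mode", "Selected IOVA mode 'VA'", "Probing VFIO support", virtual-area reservations) are emitted while DPDK's EAL is brought up, which an SPDK program triggers through spdk_env_init(). A hedged sketch of the env options that steer those decisions, assuming the field names in struct spdk_env_opts from include/spdk/env.h:

#include <stdio.h>
#include "spdk/env.h"

/* Sketch: environment options that influence the EAL decisions logged above. */
int main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "env_sketch";                  /* hypothetical app name */
	opts.core_mask = "0x1";                    /* one lcore, like -c 0x1 in this run */
	opts.base_virtaddr = 0x200000000000ULL;    /* matches --base-virtaddr above */
	opts.iova_mode = NULL;                     /* NULL: let EAL pick; 'VA' here because VFIO/IOMMU is usable */

	if (spdk_env_init(&opts) != 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}
	/* ... DMA allocations, bdev or NVMe work would go here ... */
	spdk_env_fini();
	return 0;
}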
00:06:02.188 EAL: Setting maximum number of open files to 524288 00:06:02.188 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:06:02.188 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:06:02.188 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:06:02.188 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.188 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:06:02.189 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.189 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.189 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:06:02.189 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:06:02.189 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.189 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:06:02.189 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.189 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.189 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:06:02.189 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:06:02.189 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.189 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:06:02.189 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.189 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.189 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:06:02.189 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:06:02.189 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.189 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:06:02.189 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:06:02.189 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.189 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:06:02.189 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:06:02.189 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:06:02.189 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.189 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:06:02.189 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.189 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.189 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:06:02.189 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:06:02.189 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.189 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:06:02.189 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.189 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.189 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:06:02.189 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:06:02.189 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.189 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:06:02.189 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.189 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.189 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:06:02.189 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:06:02.189 EAL: Ask a virtual area of 0x61000 bytes 00:06:02.189 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:06:02.189 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:06:02.189 EAL: Ask a virtual area of 0x400000000 bytes 00:06:02.189 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:06:02.189 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:06:02.189 EAL: Hugepages will be freed exactly as allocated. 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: TSC frequency is ~2500000 KHz 00:06:02.189 EAL: Main lcore 0 is ready (tid=7f43caeaba00;cpuset=[0]) 00:06:02.189 EAL: Trying to obtain current memory policy. 00:06:02.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.189 EAL: Restoring previous memory policy: 0 00:06:02.189 EAL: request: mp_malloc_sync 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: Heap on socket 0 was expanded by 2MB 00:06:02.189 EAL: PCI device 0000:41:00.0 on NUMA socket 0 00:06:02.189 EAL: probe driver: 8086:37d2 net_i40e 00:06:02.189 EAL: Not managed by a supported kernel driver, skipped 00:06:02.189 EAL: PCI device 0000:41:00.1 on NUMA socket 0 00:06:02.189 EAL: probe driver: 8086:37d2 net_i40e 00:06:02.189 EAL: Not managed by a supported kernel driver, skipped 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: No PCI address specified using 'addr=' in: bus=pci 00:06:02.189 EAL: Mem event callback 'spdk:(nil)' registered 00:06:02.189 00:06:02.189 00:06:02.189 CUnit - A unit testing framework for C - Version 2.1-3 00:06:02.189 http://cunit.sourceforge.net/ 00:06:02.189 00:06:02.189 00:06:02.189 Suite: components_suite 00:06:02.189 Test: vtophys_malloc_test ...passed 00:06:02.189 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:06:02.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.189 EAL: Restoring previous memory policy: 4 00:06:02.189 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.189 EAL: request: mp_malloc_sync 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: Heap on socket 0 was expanded by 4MB 00:06:02.189 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.189 EAL: request: mp_malloc_sync 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: Heap on socket 0 was shrunk by 4MB 00:06:02.189 EAL: Trying to obtain current memory policy. 00:06:02.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.189 EAL: Restoring previous memory policy: 4 00:06:02.189 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.189 EAL: request: mp_malloc_sync 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: Heap on socket 0 was expanded by 6MB 00:06:02.189 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.189 EAL: request: mp_malloc_sync 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: Heap on socket 0 was shrunk by 6MB 00:06:02.189 EAL: Trying to obtain current memory policy. 00:06:02.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.189 EAL: Restoring previous memory policy: 4 00:06:02.189 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.189 EAL: request: mp_malloc_sync 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: Heap on socket 0 was expanded by 10MB 00:06:02.189 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.189 EAL: request: mp_malloc_sync 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: Heap on socket 0 was shrunk by 10MB 00:06:02.189 EAL: Trying to obtain current memory policy. 
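The "Mem event callback 'spdk:(nil)' registered" line, and the "Calling mem event callback 'spdk:(nil)'" lines that follow each heap change, are DPDK's dynamic-memory event mechanism: SPDK registers a callback so it can map or unmap newly allocated hugepages for DMA, and every expansion or shrink of the heap fires it. A hedged DPDK-level sketch of the same mechanism (API from rte_memory.h; the "spdk:(nil)" name presumably comes from SPDK formatting "spdk:%p" of its context pointer):

#include <stdio.h>
#include <rte_eal.h>
#include <rte_memory.h>

/* Sketch: log every EAL heap grow/shrink, as SPDK's env layer does for DMA mapping. */
static void
mem_event(enum rte_mem_event type, const void *addr, size_t len, void *arg)
{
	(void)arg;
	printf("%s addr=%p len=%zu\n",
	       type == RTE_MEM_EVENT_ALLOC ? "alloc" : "free", addr, len);
}

int main(int argc, char **argv)
{
	if (rte_eal_init(argc, argv) < 0) {
		return 1;
	}
	/* The name is only used for logging; EAL then prints "Mem event callback 'demo' registered". */
	rte_mem_event_callback_register("demo", mem_event, NULL);
	/* ... rte_malloc()/rte_free() beyond the preallocated memory now triggers mem_event ... */
	rte_mem_event_callback_unregister("demo", NULL);
	rte_eal_cleanup();
	return 0;
}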
00:06:02.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.189 EAL: Restoring previous memory policy: 4 00:06:02.189 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.189 EAL: request: mp_malloc_sync 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: Heap on socket 0 was expanded by 18MB 00:06:02.189 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.189 EAL: request: mp_malloc_sync 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: Heap on socket 0 was shrunk by 18MB 00:06:02.189 EAL: Trying to obtain current memory policy. 00:06:02.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.189 EAL: Restoring previous memory policy: 4 00:06:02.189 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.189 EAL: request: mp_malloc_sync 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: Heap on socket 0 was expanded by 34MB 00:06:02.189 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.189 EAL: request: mp_malloc_sync 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: Heap on socket 0 was shrunk by 34MB 00:06:02.189 EAL: Trying to obtain current memory policy. 00:06:02.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.189 EAL: Restoring previous memory policy: 4 00:06:02.189 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.189 EAL: request: mp_malloc_sync 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: Heap on socket 0 was expanded by 66MB 00:06:02.189 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.189 EAL: request: mp_malloc_sync 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: Heap on socket 0 was shrunk by 66MB 00:06:02.189 EAL: Trying to obtain current memory policy. 00:06:02.189 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.189 EAL: Restoring previous memory policy: 4 00:06:02.189 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.189 EAL: request: mp_malloc_sync 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: Heap on socket 0 was expanded by 130MB 00:06:02.189 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.189 EAL: request: mp_malloc_sync 00:06:02.189 EAL: No shared files mode enabled, IPC is disabled 00:06:02.189 EAL: Heap on socket 0 was shrunk by 130MB 00:06:02.190 EAL: Trying to obtain current memory policy. 00:06:02.190 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.190 EAL: Restoring previous memory policy: 4 00:06:02.190 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.190 EAL: request: mp_malloc_sync 00:06:02.190 EAL: No shared files mode enabled, IPC is disabled 00:06:02.190 EAL: Heap on socket 0 was expanded by 258MB 00:06:02.449 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.449 EAL: request: mp_malloc_sync 00:06:02.449 EAL: No shared files mode enabled, IPC is disabled 00:06:02.449 EAL: Heap on socket 0 was shrunk by 258MB 00:06:02.449 EAL: Trying to obtain current memory policy. 
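Each "Heap on socket 0 was expanded by N MB ... was shrunk by N MB" pair in vtophys_spdk_malloc_test is one allocate-translate-free round at roughly doubling sizes (4 MB, 6 MB, 10 MB, ... 1026 MB). A hedged sketch of the same pattern with spdk_dma_malloc() and spdk_vtophys() (illustrative, not the test's source):

#include <inttypes.h>
#include <stdio.h>
#include "spdk/env.h"

/* Sketch: allocate growing DMA-safe buffers and translate them, as vtophys does. */
int main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "vtophys_sketch";            /* hypothetical app name */
	if (spdk_env_init(&opts) != 0) {
		return 1;
	}

	for (size_t size = 4ULL << 20; size <= 1024ULL << 20; size *= 2) {
		void *buf = spdk_dma_malloc(size, 0x200000, NULL);  /* grows the heap: "expanded by" */
		if (buf == NULL) {
			break;
		}
		uint64_t paddr = spdk_vtophys(buf, NULL);
		if (paddr != SPDK_VTOPHYS_ERROR) {
			printf("va=%p -> pa=0x%" PRIx64 " (%zu bytes)\n", buf, paddr, size);
		}
		spdk_dma_free(buf);   /* frees the buffer; the heap shrink shows up as "shrunk by" */
	}
	spdk_env_fini();
	return 0;
}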
00:06:02.449 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.449 EAL: Restoring previous memory policy: 4 00:06:02.449 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.449 EAL: request: mp_malloc_sync 00:06:02.449 EAL: No shared files mode enabled, IPC is disabled 00:06:02.449 EAL: Heap on socket 0 was expanded by 514MB 00:06:02.449 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.708 EAL: request: mp_malloc_sync 00:06:02.708 EAL: No shared files mode enabled, IPC is disabled 00:06:02.708 EAL: Heap on socket 0 was shrunk by 514MB 00:06:02.708 EAL: Trying to obtain current memory policy. 00:06:02.708 EAL: Setting policy MPOL_PREFERRED for socket 0 00:06:02.967 EAL: Restoring previous memory policy: 4 00:06:02.967 EAL: Calling mem event callback 'spdk:(nil)' 00:06:02.967 EAL: request: mp_malloc_sync 00:06:02.968 EAL: No shared files mode enabled, IPC is disabled 00:06:02.968 EAL: Heap on socket 0 was expanded by 1026MB 00:06:02.968 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.227 EAL: request: mp_malloc_sync 00:06:03.228 EAL: No shared files mode enabled, IPC is disabled 00:06:03.228 EAL: Heap on socket 0 was shrunk by 1026MB 00:06:03.228 passed 00:06:03.228 00:06:03.228 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.228 suites 1 1 n/a 0 0 00:06:03.228 tests 2 2 2 0 0 00:06:03.228 asserts 497 497 497 0 n/a 00:06:03.228 00:06:03.228 Elapsed time = 0.969 seconds 00:06:03.228 EAL: Calling mem event callback 'spdk:(nil)' 00:06:03.228 EAL: request: mp_malloc_sync 00:06:03.228 EAL: No shared files mode enabled, IPC is disabled 00:06:03.228 EAL: Heap on socket 0 was shrunk by 2MB 00:06:03.228 EAL: No shared files mode enabled, IPC is disabled 00:06:03.228 EAL: No shared files mode enabled, IPC is disabled 00:06:03.228 EAL: No shared files mode enabled, IPC is disabled 00:06:03.228 00:06:03.228 real 0m1.121s 00:06:03.228 user 0m0.639s 00:06:03.228 sys 0m0.449s 00:06:03.228 15:56:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.228 15:56:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.228 ************************************ 00:06:03.228 END TEST env_vtophys 00:06:03.228 ************************************ 00:06:03.228 15:56:33 -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:06:03.228 15:56:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:03.228 15:56:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.228 15:56:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.228 ************************************ 00:06:03.228 START TEST env_pci 00:06:03.228 ************************************ 00:06:03.228 15:56:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/pci/pci_ut 00:06:03.228 00:06:03.228 00:06:03.228 CUnit - A unit testing framework for C - Version 2.1-3 00:06:03.228 http://cunit.sourceforge.net/ 00:06:03.228 00:06:03.228 00:06:03.228 Suite: pci 00:06:03.228 Test: pci_hook ...[2024-11-20 15:56:33.908590] /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1175465 has claimed it 00:06:03.228 EAL: Cannot find device (10000:00:01.0) 00:06:03.228 EAL: Failed to attach device on primary process 00:06:03.228 passed 00:06:03.228 00:06:03.228 Run Summary: Type Total Ran Passed Failed Inactive 00:06:03.228 suites 1 1 n/a 0 0 00:06:03.228 tests 1 1 1 0 0 00:06:03.228 asserts 
25 25 25 0 n/a 00:06:03.228 00:06:03.228 Elapsed time = 0.034 seconds 00:06:03.228 00:06:03.228 real 0m0.054s 00:06:03.228 user 0m0.018s 00:06:03.228 sys 0m0.036s 00:06:03.228 15:56:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:03.228 15:56:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.228 ************************************ 00:06:03.228 END TEST env_pci 00:06:03.228 ************************************ 00:06:03.228 15:56:33 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:06:03.228 15:56:33 -- env/env.sh@15 -- # uname 00:06:03.228 15:56:33 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:06:03.228 15:56:33 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:06:03.228 15:56:33 -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:03.228 15:56:33 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:06:03.228 15:56:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:03.228 15:56:33 -- common/autotest_common.sh@10 -- # set +x 00:06:03.228 ************************************ 00:06:03.228 START TEST env_dpdk_post_init 00:06:03.228 ************************************ 00:06:03.228 15:56:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:06:03.228 EAL: Detected CPU lcores: 112 00:06:03.228 EAL: Detected NUMA nodes: 2 00:06:03.228 EAL: Detected shared linkage of DPDK 00:06:03.488 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:03.488 EAL: Selected IOVA mode 'VA' 00:06:03.488 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.488 EAL: VFIO support initialized 00:06:03.488 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:03.488 EAL: Using IOMMU type 1 (Type 1) 00:06:03.488 EAL: Ignore mapping IO port bar(1) 00:06:03.488 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:06:03.488 EAL: Ignore mapping IO port bar(1) 00:06:03.488 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:06:03.488 EAL: Ignore mapping IO port bar(1) 00:06:03.488 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:06:03.488 EAL: Ignore mapping IO port bar(1) 00:06:03.488 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:06:03.488 EAL: Ignore mapping IO port bar(1) 00:06:03.488 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:06:03.488 EAL: Ignore mapping IO port bar(1) 00:06:03.488 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:06:03.488 EAL: Ignore mapping IO port bar(1) 00:06:03.488 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:06:03.488 EAL: Ignore mapping IO port bar(1) 00:06:03.488 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:06:03.488 EAL: Ignore mapping IO port bar(1) 00:06:03.488 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:06:03.488 EAL: Ignore mapping IO port bar(1) 00:06:03.488 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:06:03.488 EAL: Ignore mapping IO port bar(1) 00:06:03.488 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:06:03.488 EAL: Ignore mapping IO port bar(1) 00:06:03.488 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 
0000:80:04.3 (socket 1) 00:06:03.748 EAL: Ignore mapping IO port bar(1) 00:06:03.748 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:06:03.748 EAL: Ignore mapping IO port bar(1) 00:06:03.748 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:06:03.748 EAL: Ignore mapping IO port bar(1) 00:06:03.748 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:06:03.748 EAL: Ignore mapping IO port bar(1) 00:06:03.748 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:06:04.317 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:d8:00.0 (socket 1) 00:06:08.513 EAL: Releasing PCI mapped resource for 0000:d8:00.0 00:06:08.513 EAL: Calling pci_unmap_resource for 0000:d8:00.0 at 0x202001040000 00:06:08.773 Starting DPDK initialization... 00:06:08.773 Starting SPDK post initialization... 00:06:08.773 SPDK NVMe probe 00:06:08.773 Attaching to 0000:d8:00.0 00:06:08.773 Attached to 0000:d8:00.0 00:06:08.773 Cleaning up... 00:06:08.773 00:06:08.773 real 0m5.382s 00:06:08.773 user 0m3.992s 00:06:08.773 sys 0m0.435s 00:06:08.773 15:56:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.773 15:56:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.773 ************************************ 00:06:08.773 END TEST env_dpdk_post_init 00:06:08.773 ************************************ 00:06:08.773 15:56:39 -- env/env.sh@26 -- # uname 00:06:08.773 15:56:39 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:06:08.773 15:56:39 -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:08.773 15:56:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:08.773 15:56:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:08.773 15:56:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.773 ************************************ 00:06:08.773 START TEST env_mem_callbacks 00:06:08.773 ************************************ 00:06:08.773 15:56:39 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:06:08.773 EAL: Detected CPU lcores: 112 00:06:08.773 EAL: Detected NUMA nodes: 2 00:06:08.773 EAL: Detected shared linkage of DPDK 00:06:08.773 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:06:08.773 EAL: Selected IOVA mode 'VA' 00:06:08.773 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.773 EAL: VFIO support initialized 00:06:08.773 TELEMETRY: No legacy callbacks, legacy socket not created 00:06:08.773 00:06:08.774 00:06:08.774 CUnit - A unit testing framework for C - Version 2.1-3 00:06:08.774 http://cunit.sourceforge.net/ 00:06:08.774 00:06:08.774 00:06:08.774 Suite: memory 00:06:08.774 Test: test ... 
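The env_dpdk_post_init lines "SPDK NVMe probe / Attaching to 0000:d8:00.0 / Attached to 0000:d8:00.0 / Cleaning up..." are printed from the test's probe and attach callbacks while spdk_nvme_probe() enumerates the PCI devices the spdk_ioat and spdk_nvme drivers claimed above. A hedged sketch of that callback pattern, similar in shape to SPDK's hello_world example (not the test's source):

#include <stdio.h>
#include <stdbool.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Sketch: enumerate local NVMe controllers and print attach progress. */
static struct spdk_nvme_ctrlr *g_ctrlr;

static bool
probe_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	(void)ctx; (void)opts;
	printf("Attaching to %s\n", trid->traddr);
	return true;                        /* true = attach to this controller */
}

static void
attach_cb(void *ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	(void)ctx; (void)opts;
	printf("Attached to %s\n", trid->traddr);
	g_ctrlr = ctrlr;                    /* keep the handle for cleanup */
}

int main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "probe_sketch";         /* hypothetical app name */
	if (spdk_env_init(&opts) != 0) {
		return 1;
	}
	/* NULL trid: probe the local PCIe bus; remove_cb omitted for brevity. */
	int rc = spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL);
	printf("Cleaning up...\n");
	if (g_ctrlr != NULL) {
		spdk_nvme_detach(g_ctrlr);
	}
	spdk_env_fini();
	return rc;
}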
00:06:08.774 register 0x200000200000 2097152 00:06:08.774 malloc 3145728 00:06:08.774 register 0x200000400000 4194304 00:06:08.774 buf 0x200000500000 len 3145728 PASSED 00:06:08.774 malloc 64 00:06:08.774 buf 0x2000004fff40 len 64 PASSED 00:06:08.774 malloc 4194304 00:06:08.774 register 0x200000800000 6291456 00:06:08.774 buf 0x200000a00000 len 4194304 PASSED 00:06:08.774 free 0x200000500000 3145728 00:06:08.774 free 0x2000004fff40 64 00:06:08.774 unregister 0x200000400000 4194304 PASSED 00:06:08.774 free 0x200000a00000 4194304 00:06:08.774 unregister 0x200000800000 6291456 PASSED 00:06:08.774 malloc 8388608 00:06:08.774 register 0x200000400000 10485760 00:06:08.774 buf 0x200000600000 len 8388608 PASSED 00:06:08.774 free 0x200000600000 8388608 00:06:08.774 unregister 0x200000400000 10485760 PASSED 00:06:08.774 passed 00:06:08.774 00:06:08.774 Run Summary: Type Total Ran Passed Failed Inactive 00:06:08.774 suites 1 1 n/a 0 0 00:06:08.774 tests 1 1 1 0 0 00:06:08.774 asserts 15 15 15 0 n/a 00:06:08.774 00:06:08.774 Elapsed time = 0.008 seconds 00:06:08.774 00:06:08.774 real 0m0.069s 00:06:08.774 user 0m0.028s 00:06:08.774 sys 0m0.041s 00:06:08.774 15:56:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.774 15:56:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.774 ************************************ 00:06:08.774 END TEST env_mem_callbacks 00:06:08.774 ************************************ 00:06:08.774 00:06:08.774 real 0m7.204s 00:06:08.774 user 0m4.992s 00:06:08.774 sys 0m1.284s 00:06:08.774 15:56:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:08.774 15:56:39 -- common/autotest_common.sh@10 -- # set +x 00:06:08.774 ************************************ 00:06:08.774 END TEST env 00:06:08.774 ************************************ 00:06:09.033 15:56:39 -- spdk/autotest.sh@163 -- # run_test rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:09.033 15:56:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.033 15:56:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.033 15:56:39 -- common/autotest_common.sh@10 -- # set +x 00:06:09.033 ************************************ 00:06:09.033 START TEST rpc 00:06:09.033 ************************************ 00:06:09.033 15:56:39 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc/rpc.sh 00:06:09.033 * Looking for test storage... 
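In the mem_callbacks output above, the "register 0x... / unregister 0x..." lines are printed by the test's notify hook: it allocates a mem map whose notify callback fires whenever memory is registered or unregistered, and the malloc/free calls (3145728, 64, 4194304, 8388608 bytes) drive those notifications as the DPDK heap grows and shrinks. A hedged sketch of that hook, assuming the spdk_mem_map_ops layout in include/spdk/env.h (illustrative, not the test's source):

#include <stdio.h>
#include "spdk/env.h"

/* Sketch: print every memory (un)registration, like the mem_callbacks output above. */
static int
notify(void *ctx, struct spdk_mem_map *map, enum spdk_mem_map_notify_action action,
       void *vaddr, size_t size)
{
	(void)ctx; (void)map;
	printf("%s %p %zu\n",
	       action == SPDK_MEM_MAP_NOTIFY_REGISTER ? "register" : "unregister",
	       vaddr, size);
	return 0;
}

static const struct spdk_mem_map_ops ops = {
	.notify_cb = notify,
	.are_contiguous = NULL,
};

int main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "mem_cb_sketch";                 /* hypothetical app name */
	if (spdk_env_init(&opts) != 0) {
		return 1;
	}

	struct spdk_mem_map *map = spdk_mem_map_alloc(0, &ops, NULL);
	if (map == NULL) {
		return 1;
	}

	/* DMA allocations that grow the heap show up as "register ..." lines,
	 * and freeing them can later produce matching "unregister ..." lines. */
	void *buf = spdk_dma_malloc(3 * 1024 * 1024, 0, NULL);
	spdk_dma_free(buf);

	spdk_mem_map_free(&map);
	spdk_env_fini();
	return 0;
}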
00:06:09.033 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:09.033 15:56:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:09.033 15:56:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:09.033 15:56:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:09.033 15:56:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:09.033 15:56:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:09.033 15:56:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:09.033 15:56:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:09.033 15:56:39 -- scripts/common.sh@335 -- # IFS=.-: 00:06:09.033 15:56:39 -- scripts/common.sh@335 -- # read -ra ver1 00:06:09.033 15:56:39 -- scripts/common.sh@336 -- # IFS=.-: 00:06:09.033 15:56:39 -- scripts/common.sh@336 -- # read -ra ver2 00:06:09.033 15:56:39 -- scripts/common.sh@337 -- # local 'op=<' 00:06:09.033 15:56:39 -- scripts/common.sh@339 -- # ver1_l=2 00:06:09.033 15:56:39 -- scripts/common.sh@340 -- # ver2_l=1 00:06:09.033 15:56:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:09.033 15:56:39 -- scripts/common.sh@343 -- # case "$op" in 00:06:09.033 15:56:39 -- scripts/common.sh@344 -- # : 1 00:06:09.033 15:56:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:09.033 15:56:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:09.033 15:56:39 -- scripts/common.sh@364 -- # decimal 1 00:06:09.033 15:56:39 -- scripts/common.sh@352 -- # local d=1 00:06:09.033 15:56:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:09.033 15:56:39 -- scripts/common.sh@354 -- # echo 1 00:06:09.033 15:56:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:09.033 15:56:39 -- scripts/common.sh@365 -- # decimal 2 00:06:09.033 15:56:39 -- scripts/common.sh@352 -- # local d=2 00:06:09.033 15:56:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:09.033 15:56:39 -- scripts/common.sh@354 -- # echo 2 00:06:09.033 15:56:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:09.033 15:56:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:09.033 15:56:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:09.033 15:56:39 -- scripts/common.sh@367 -- # return 0 00:06:09.033 15:56:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:09.033 15:56:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:09.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.033 --rc genhtml_branch_coverage=1 00:06:09.034 --rc genhtml_function_coverage=1 00:06:09.034 --rc genhtml_legend=1 00:06:09.034 --rc geninfo_all_blocks=1 00:06:09.034 --rc geninfo_unexecuted_blocks=1 00:06:09.034 00:06:09.034 ' 00:06:09.034 15:56:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:09.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.034 --rc genhtml_branch_coverage=1 00:06:09.034 --rc genhtml_function_coverage=1 00:06:09.034 --rc genhtml_legend=1 00:06:09.034 --rc geninfo_all_blocks=1 00:06:09.034 --rc geninfo_unexecuted_blocks=1 00:06:09.034 00:06:09.034 ' 00:06:09.034 15:56:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:09.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.034 --rc genhtml_branch_coverage=1 00:06:09.034 --rc genhtml_function_coverage=1 00:06:09.034 --rc genhtml_legend=1 00:06:09.034 --rc geninfo_all_blocks=1 00:06:09.034 --rc geninfo_unexecuted_blocks=1 00:06:09.034 00:06:09.034 ' 
00:06:09.034 15:56:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:09.034 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:09.034 --rc genhtml_branch_coverage=1 00:06:09.034 --rc genhtml_function_coverage=1 00:06:09.034 --rc genhtml_legend=1 00:06:09.034 --rc geninfo_all_blocks=1 00:06:09.034 --rc geninfo_unexecuted_blocks=1 00:06:09.034 00:06:09.034 ' 00:06:09.034 15:56:39 -- rpc/rpc.sh@65 -- # spdk_pid=1176636 00:06:09.034 15:56:39 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:09.034 15:56:39 -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:06:09.034 15:56:39 -- rpc/rpc.sh@67 -- # waitforlisten 1176636 00:06:09.034 15:56:39 -- common/autotest_common.sh@829 -- # '[' -z 1176636 ']' 00:06:09.034 15:56:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:09.034 15:56:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:09.034 15:56:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:09.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:09.034 15:56:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:09.034 15:56:39 -- common/autotest_common.sh@10 -- # set +x 00:06:09.293 [2024-11-20 15:56:39.845280] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:09.293 [2024-11-20 15:56:39.845337] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1176636 ] 00:06:09.293 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.293 [2024-11-20 15:56:39.929328] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.293 [2024-11-20 15:56:39.967097] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:09.293 [2024-11-20 15:56:39.967206] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:06:09.293 [2024-11-20 15:56:39.967217] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1176636' to capture a snapshot of events at runtime. 00:06:09.293 [2024-11-20 15:56:39.967226] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1176636 for offline analysis/debug. 
00:06:09.293 [2024-11-20 15:56:39.967254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.861 15:56:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.861 15:56:40 -- common/autotest_common.sh@862 -- # return 0 00:06:09.861 15:56:40 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:09.861 15:56:40 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc 00:06:09.861 15:56:40 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:06:09.861 15:56:40 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:06:09.861 15:56:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:09.861 15:56:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:09.861 15:56:40 -- common/autotest_common.sh@10 -- # set +x 00:06:09.861 ************************************ 00:06:09.861 START TEST rpc_integrity 00:06:09.861 ************************************ 00:06:09.861 15:56:40 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:06:09.861 15:56:40 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:09.861 15:56:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:09.861 15:56:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.121 15:56:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.121 15:56:40 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:10.121 15:56:40 -- rpc/rpc.sh@13 -- # jq length 00:06:10.121 15:56:40 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:10.121 15:56:40 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:10.121 15:56:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.121 15:56:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.121 15:56:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.121 15:56:40 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:06:10.121 15:56:40 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:10.121 15:56:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.121 15:56:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.121 15:56:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.121 15:56:40 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:10.121 { 00:06:10.121 "name": "Malloc0", 00:06:10.121 "aliases": [ 00:06:10.121 "4bd5562d-92aa-47d8-9c92-8f7ace813717" 00:06:10.121 ], 00:06:10.121 "product_name": "Malloc disk", 00:06:10.121 "block_size": 512, 00:06:10.121 "num_blocks": 16384, 00:06:10.121 "uuid": "4bd5562d-92aa-47d8-9c92-8f7ace813717", 00:06:10.121 "assigned_rate_limits": { 00:06:10.121 "rw_ios_per_sec": 0, 00:06:10.121 "rw_mbytes_per_sec": 0, 00:06:10.121 "r_mbytes_per_sec": 0, 00:06:10.121 "w_mbytes_per_sec": 0 00:06:10.121 }, 00:06:10.121 "claimed": false, 00:06:10.121 "zoned": false, 00:06:10.121 "supported_io_types": { 00:06:10.121 "read": true, 00:06:10.121 "write": true, 00:06:10.121 "unmap": true, 00:06:10.121 "write_zeroes": true, 00:06:10.121 "flush": true, 00:06:10.121 "reset": true, 00:06:10.121 "compare": false, 00:06:10.121 "compare_and_write": false, 00:06:10.121 "abort": true, 00:06:10.121 "nvme_admin": 
false, 00:06:10.121 "nvme_io": false 00:06:10.121 }, 00:06:10.121 "memory_domains": [ 00:06:10.121 { 00:06:10.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.121 "dma_device_type": 2 00:06:10.121 } 00:06:10.121 ], 00:06:10.121 "driver_specific": {} 00:06:10.121 } 00:06:10.121 ]' 00:06:10.121 15:56:40 -- rpc/rpc.sh@17 -- # jq length 00:06:10.121 15:56:40 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:10.121 15:56:40 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:06:10.121 15:56:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.121 15:56:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.121 [2024-11-20 15:56:40.771214] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:06:10.121 [2024-11-20 15:56:40.771245] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:10.121 [2024-11-20 15:56:40.771260] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23f2280 00:06:10.121 [2024-11-20 15:56:40.771268] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:10.121 [2024-11-20 15:56:40.772263] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:10.121 [2024-11-20 15:56:40.772285] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:10.121 Passthru0 00:06:10.121 15:56:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.121 15:56:40 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:10.121 15:56:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.121 15:56:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.121 15:56:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.121 15:56:40 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:10.121 { 00:06:10.121 "name": "Malloc0", 00:06:10.121 "aliases": [ 00:06:10.121 "4bd5562d-92aa-47d8-9c92-8f7ace813717" 00:06:10.121 ], 00:06:10.121 "product_name": "Malloc disk", 00:06:10.121 "block_size": 512, 00:06:10.121 "num_blocks": 16384, 00:06:10.121 "uuid": "4bd5562d-92aa-47d8-9c92-8f7ace813717", 00:06:10.121 "assigned_rate_limits": { 00:06:10.121 "rw_ios_per_sec": 0, 00:06:10.121 "rw_mbytes_per_sec": 0, 00:06:10.121 "r_mbytes_per_sec": 0, 00:06:10.121 "w_mbytes_per_sec": 0 00:06:10.121 }, 00:06:10.121 "claimed": true, 00:06:10.121 "claim_type": "exclusive_write", 00:06:10.121 "zoned": false, 00:06:10.121 "supported_io_types": { 00:06:10.121 "read": true, 00:06:10.121 "write": true, 00:06:10.121 "unmap": true, 00:06:10.121 "write_zeroes": true, 00:06:10.121 "flush": true, 00:06:10.121 "reset": true, 00:06:10.121 "compare": false, 00:06:10.121 "compare_and_write": false, 00:06:10.121 "abort": true, 00:06:10.121 "nvme_admin": false, 00:06:10.121 "nvme_io": false 00:06:10.121 }, 00:06:10.121 "memory_domains": [ 00:06:10.121 { 00:06:10.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.121 "dma_device_type": 2 00:06:10.121 } 00:06:10.121 ], 00:06:10.121 "driver_specific": {} 00:06:10.121 }, 00:06:10.121 { 00:06:10.121 "name": "Passthru0", 00:06:10.121 "aliases": [ 00:06:10.121 "3fed2a52-1f05-5ed5-b20e-a13cce67d386" 00:06:10.121 ], 00:06:10.121 "product_name": "passthru", 00:06:10.121 "block_size": 512, 00:06:10.121 "num_blocks": 16384, 00:06:10.121 "uuid": "3fed2a52-1f05-5ed5-b20e-a13cce67d386", 00:06:10.121 "assigned_rate_limits": { 00:06:10.121 "rw_ios_per_sec": 0, 00:06:10.121 "rw_mbytes_per_sec": 0, 00:06:10.121 "r_mbytes_per_sec": 0, 00:06:10.121 "w_mbytes_per_sec": 0 00:06:10.121 }, 00:06:10.121 "claimed": 
false, 00:06:10.121 "zoned": false, 00:06:10.121 "supported_io_types": { 00:06:10.121 "read": true, 00:06:10.121 "write": true, 00:06:10.121 "unmap": true, 00:06:10.121 "write_zeroes": true, 00:06:10.121 "flush": true, 00:06:10.122 "reset": true, 00:06:10.122 "compare": false, 00:06:10.122 "compare_and_write": false, 00:06:10.122 "abort": true, 00:06:10.122 "nvme_admin": false, 00:06:10.122 "nvme_io": false 00:06:10.122 }, 00:06:10.122 "memory_domains": [ 00:06:10.122 { 00:06:10.122 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.122 "dma_device_type": 2 00:06:10.122 } 00:06:10.122 ], 00:06:10.122 "driver_specific": { 00:06:10.122 "passthru": { 00:06:10.122 "name": "Passthru0", 00:06:10.122 "base_bdev_name": "Malloc0" 00:06:10.122 } 00:06:10.122 } 00:06:10.122 } 00:06:10.122 ]' 00:06:10.122 15:56:40 -- rpc/rpc.sh@21 -- # jq length 00:06:10.122 15:56:40 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:10.122 15:56:40 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:10.122 15:56:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.122 15:56:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.122 15:56:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.122 15:56:40 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:10.122 15:56:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.122 15:56:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.122 15:56:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.122 15:56:40 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:10.122 15:56:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.122 15:56:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.122 15:56:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.122 15:56:40 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:10.122 15:56:40 -- rpc/rpc.sh@26 -- # jq length 00:06:10.122 15:56:40 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:10.122 00:06:10.122 real 0m0.262s 00:06:10.122 user 0m0.156s 00:06:10.122 sys 0m0.044s 00:06:10.122 15:56:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.122 15:56:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.122 ************************************ 00:06:10.122 END TEST rpc_integrity 00:06:10.122 ************************************ 00:06:10.381 15:56:40 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:10.382 15:56:40 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.382 15:56:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.382 15:56:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.382 ************************************ 00:06:10.382 START TEST rpc_plugins 00:06:10.382 ************************************ 00:06:10.382 15:56:40 -- common/autotest_common.sh@1114 -- # rpc_plugins 00:06:10.382 15:56:40 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:10.382 15:56:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.382 15:56:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.382 15:56:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.382 15:56:40 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:10.382 15:56:40 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:10.382 15:56:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.382 15:56:40 -- common/autotest_common.sh@10 -- # set +x 00:06:10.382 15:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.382 15:56:41 -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:10.382 { 00:06:10.382 "name": 
"Malloc1", 00:06:10.382 "aliases": [ 00:06:10.382 "26d0d427-4bd9-4723-8222-5b4ec103ceba" 00:06:10.382 ], 00:06:10.382 "product_name": "Malloc disk", 00:06:10.382 "block_size": 4096, 00:06:10.382 "num_blocks": 256, 00:06:10.382 "uuid": "26d0d427-4bd9-4723-8222-5b4ec103ceba", 00:06:10.382 "assigned_rate_limits": { 00:06:10.382 "rw_ios_per_sec": 0, 00:06:10.382 "rw_mbytes_per_sec": 0, 00:06:10.382 "r_mbytes_per_sec": 0, 00:06:10.382 "w_mbytes_per_sec": 0 00:06:10.382 }, 00:06:10.382 "claimed": false, 00:06:10.382 "zoned": false, 00:06:10.382 "supported_io_types": { 00:06:10.382 "read": true, 00:06:10.382 "write": true, 00:06:10.382 "unmap": true, 00:06:10.382 "write_zeroes": true, 00:06:10.382 "flush": true, 00:06:10.382 "reset": true, 00:06:10.382 "compare": false, 00:06:10.382 "compare_and_write": false, 00:06:10.382 "abort": true, 00:06:10.382 "nvme_admin": false, 00:06:10.382 "nvme_io": false 00:06:10.382 }, 00:06:10.382 "memory_domains": [ 00:06:10.382 { 00:06:10.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.382 "dma_device_type": 2 00:06:10.382 } 00:06:10.382 ], 00:06:10.382 "driver_specific": {} 00:06:10.382 } 00:06:10.382 ]' 00:06:10.382 15:56:41 -- rpc/rpc.sh@32 -- # jq length 00:06:10.382 15:56:41 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:10.382 15:56:41 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:10.382 15:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.382 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.382 15:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.382 15:56:41 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:10.382 15:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.382 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.382 15:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.382 15:56:41 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:10.382 15:56:41 -- rpc/rpc.sh@36 -- # jq length 00:06:10.382 15:56:41 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:10.382 00:06:10.382 real 0m0.145s 00:06:10.382 user 0m0.088s 00:06:10.382 sys 0m0.025s 00:06:10.382 15:56:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.382 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.382 ************************************ 00:06:10.382 END TEST rpc_plugins 00:06:10.382 ************************************ 00:06:10.382 15:56:41 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:10.382 15:56:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.382 15:56:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.382 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.382 ************************************ 00:06:10.382 START TEST rpc_trace_cmd_test 00:06:10.382 ************************************ 00:06:10.382 15:56:41 -- common/autotest_common.sh@1114 -- # rpc_trace_cmd_test 00:06:10.382 15:56:41 -- rpc/rpc.sh@40 -- # local info 00:06:10.382 15:56:41 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:10.382 15:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.382 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.382 15:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.382 15:56:41 -- rpc/rpc.sh@42 -- # info='{ 00:06:10.382 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1176636", 00:06:10.382 "tpoint_group_mask": "0x8", 00:06:10.382 "iscsi_conn": { 00:06:10.382 "mask": "0x2", 00:06:10.382 "tpoint_mask": "0x0" 00:06:10.382 }, 00:06:10.382 
"scsi": { 00:06:10.382 "mask": "0x4", 00:06:10.382 "tpoint_mask": "0x0" 00:06:10.382 }, 00:06:10.382 "bdev": { 00:06:10.382 "mask": "0x8", 00:06:10.382 "tpoint_mask": "0xffffffffffffffff" 00:06:10.382 }, 00:06:10.382 "nvmf_rdma": { 00:06:10.382 "mask": "0x10", 00:06:10.382 "tpoint_mask": "0x0" 00:06:10.382 }, 00:06:10.382 "nvmf_tcp": { 00:06:10.382 "mask": "0x20", 00:06:10.382 "tpoint_mask": "0x0" 00:06:10.382 }, 00:06:10.382 "ftl": { 00:06:10.382 "mask": "0x40", 00:06:10.382 "tpoint_mask": "0x0" 00:06:10.382 }, 00:06:10.382 "blobfs": { 00:06:10.382 "mask": "0x80", 00:06:10.382 "tpoint_mask": "0x0" 00:06:10.382 }, 00:06:10.382 "dsa": { 00:06:10.382 "mask": "0x200", 00:06:10.382 "tpoint_mask": "0x0" 00:06:10.382 }, 00:06:10.382 "thread": { 00:06:10.382 "mask": "0x400", 00:06:10.382 "tpoint_mask": "0x0" 00:06:10.382 }, 00:06:10.382 "nvme_pcie": { 00:06:10.382 "mask": "0x800", 00:06:10.382 "tpoint_mask": "0x0" 00:06:10.382 }, 00:06:10.382 "iaa": { 00:06:10.382 "mask": "0x1000", 00:06:10.382 "tpoint_mask": "0x0" 00:06:10.382 }, 00:06:10.382 "nvme_tcp": { 00:06:10.382 "mask": "0x2000", 00:06:10.382 "tpoint_mask": "0x0" 00:06:10.382 }, 00:06:10.382 "bdev_nvme": { 00:06:10.382 "mask": "0x4000", 00:06:10.382 "tpoint_mask": "0x0" 00:06:10.382 } 00:06:10.382 }' 00:06:10.382 15:56:41 -- rpc/rpc.sh@43 -- # jq length 00:06:10.641 15:56:41 -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:06:10.641 15:56:41 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:10.641 15:56:41 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:10.641 15:56:41 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:10.641 15:56:41 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:10.641 15:56:41 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:10.641 15:56:41 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:10.641 15:56:41 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:10.641 15:56:41 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:06:10.641 00:06:10.642 real 0m0.224s 00:06:10.642 user 0m0.178s 00:06:10.642 sys 0m0.039s 00:06:10.642 15:56:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:10.642 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.642 ************************************ 00:06:10.642 END TEST rpc_trace_cmd_test 00:06:10.642 ************************************ 00:06:10.642 15:56:41 -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:10.642 15:56:41 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:10.642 15:56:41 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:10.642 15:56:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:10.642 15:56:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:10.642 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.642 ************************************ 00:06:10.642 START TEST rpc_daemon_integrity 00:06:10.642 ************************************ 00:06:10.642 15:56:41 -- common/autotest_common.sh@1114 -- # rpc_integrity 00:06:10.642 15:56:41 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:10.642 15:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.642 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.642 15:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.642 15:56:41 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:10.642 15:56:41 -- rpc/rpc.sh@13 -- # jq length 00:06:10.901 15:56:41 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:10.901 15:56:41 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:10.901 15:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:06:10.901 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.901 15:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.901 15:56:41 -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:10.901 15:56:41 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:10.901 15:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.901 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.901 15:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.901 15:56:41 -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:10.901 { 00:06:10.901 "name": "Malloc2", 00:06:10.901 "aliases": [ 00:06:10.901 "79892095-8a27-4f39-ba4a-df7faba40cda" 00:06:10.901 ], 00:06:10.901 "product_name": "Malloc disk", 00:06:10.901 "block_size": 512, 00:06:10.901 "num_blocks": 16384, 00:06:10.901 "uuid": "79892095-8a27-4f39-ba4a-df7faba40cda", 00:06:10.901 "assigned_rate_limits": { 00:06:10.901 "rw_ios_per_sec": 0, 00:06:10.901 "rw_mbytes_per_sec": 0, 00:06:10.901 "r_mbytes_per_sec": 0, 00:06:10.901 "w_mbytes_per_sec": 0 00:06:10.901 }, 00:06:10.901 "claimed": false, 00:06:10.901 "zoned": false, 00:06:10.901 "supported_io_types": { 00:06:10.901 "read": true, 00:06:10.901 "write": true, 00:06:10.901 "unmap": true, 00:06:10.901 "write_zeroes": true, 00:06:10.901 "flush": true, 00:06:10.901 "reset": true, 00:06:10.901 "compare": false, 00:06:10.901 "compare_and_write": false, 00:06:10.901 "abort": true, 00:06:10.901 "nvme_admin": false, 00:06:10.901 "nvme_io": false 00:06:10.901 }, 00:06:10.901 "memory_domains": [ 00:06:10.901 { 00:06:10.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.902 "dma_device_type": 2 00:06:10.902 } 00:06:10.902 ], 00:06:10.902 "driver_specific": {} 00:06:10.902 } 00:06:10.902 ]' 00:06:10.902 15:56:41 -- rpc/rpc.sh@17 -- # jq length 00:06:10.902 15:56:41 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:10.902 15:56:41 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:10.902 15:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.902 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.902 [2024-11-20 15:56:41.561361] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:10.902 [2024-11-20 15:56:41.561391] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:10.902 [2024-11-20 15:56:41.561407] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x23f5a20 00:06:10.902 [2024-11-20 15:56:41.561416] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:10.902 [2024-11-20 15:56:41.562298] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:10.902 [2024-11-20 15:56:41.562319] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:10.902 Passthru0 00:06:10.902 15:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.902 15:56:41 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:10.902 15:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.902 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.902 15:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.902 15:56:41 -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:10.902 { 00:06:10.902 "name": "Malloc2", 00:06:10.902 "aliases": [ 00:06:10.902 "79892095-8a27-4f39-ba4a-df7faba40cda" 00:06:10.902 ], 00:06:10.902 "product_name": "Malloc disk", 00:06:10.902 "block_size": 512, 00:06:10.902 "num_blocks": 16384, 00:06:10.902 "uuid": "79892095-8a27-4f39-ba4a-df7faba40cda", 
00:06:10.902 "assigned_rate_limits": { 00:06:10.902 "rw_ios_per_sec": 0, 00:06:10.902 "rw_mbytes_per_sec": 0, 00:06:10.902 "r_mbytes_per_sec": 0, 00:06:10.902 "w_mbytes_per_sec": 0 00:06:10.902 }, 00:06:10.902 "claimed": true, 00:06:10.902 "claim_type": "exclusive_write", 00:06:10.902 "zoned": false, 00:06:10.902 "supported_io_types": { 00:06:10.902 "read": true, 00:06:10.902 "write": true, 00:06:10.902 "unmap": true, 00:06:10.902 "write_zeroes": true, 00:06:10.902 "flush": true, 00:06:10.902 "reset": true, 00:06:10.902 "compare": false, 00:06:10.902 "compare_and_write": false, 00:06:10.902 "abort": true, 00:06:10.902 "nvme_admin": false, 00:06:10.902 "nvme_io": false 00:06:10.902 }, 00:06:10.902 "memory_domains": [ 00:06:10.902 { 00:06:10.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.902 "dma_device_type": 2 00:06:10.902 } 00:06:10.902 ], 00:06:10.902 "driver_specific": {} 00:06:10.902 }, 00:06:10.902 { 00:06:10.902 "name": "Passthru0", 00:06:10.902 "aliases": [ 00:06:10.902 "6e9b31f2-b3ca-5be5-afce-b1f8dcfd2d74" 00:06:10.902 ], 00:06:10.902 "product_name": "passthru", 00:06:10.902 "block_size": 512, 00:06:10.902 "num_blocks": 16384, 00:06:10.902 "uuid": "6e9b31f2-b3ca-5be5-afce-b1f8dcfd2d74", 00:06:10.902 "assigned_rate_limits": { 00:06:10.902 "rw_ios_per_sec": 0, 00:06:10.902 "rw_mbytes_per_sec": 0, 00:06:10.902 "r_mbytes_per_sec": 0, 00:06:10.902 "w_mbytes_per_sec": 0 00:06:10.902 }, 00:06:10.902 "claimed": false, 00:06:10.902 "zoned": false, 00:06:10.902 "supported_io_types": { 00:06:10.902 "read": true, 00:06:10.902 "write": true, 00:06:10.902 "unmap": true, 00:06:10.902 "write_zeroes": true, 00:06:10.902 "flush": true, 00:06:10.902 "reset": true, 00:06:10.902 "compare": false, 00:06:10.902 "compare_and_write": false, 00:06:10.902 "abort": true, 00:06:10.902 "nvme_admin": false, 00:06:10.902 "nvme_io": false 00:06:10.902 }, 00:06:10.902 "memory_domains": [ 00:06:10.902 { 00:06:10.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:10.902 "dma_device_type": 2 00:06:10.902 } 00:06:10.902 ], 00:06:10.902 "driver_specific": { 00:06:10.902 "passthru": { 00:06:10.902 "name": "Passthru0", 00:06:10.902 "base_bdev_name": "Malloc2" 00:06:10.902 } 00:06:10.902 } 00:06:10.902 } 00:06:10.902 ]' 00:06:10.902 15:56:41 -- rpc/rpc.sh@21 -- # jq length 00:06:10.902 15:56:41 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:10.902 15:56:41 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:10.902 15:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.902 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.902 15:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.902 15:56:41 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:10.902 15:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.902 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.902 15:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.902 15:56:41 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:10.902 15:56:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:10.902 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.902 15:56:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:10.902 15:56:41 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:10.902 15:56:41 -- rpc/rpc.sh@26 -- # jq length 00:06:10.902 15:56:41 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:10.902 00:06:10.902 real 0m0.267s 00:06:10.902 user 0m0.167s 00:06:10.902 sys 0m0.039s 00:06:10.902 15:56:41 -- common/autotest_common.sh@1115 -- # 
xtrace_disable 00:06:10.902 15:56:41 -- common/autotest_common.sh@10 -- # set +x 00:06:10.902 ************************************ 00:06:10.902 END TEST rpc_daemon_integrity 00:06:10.902 ************************************ 00:06:11.162 15:56:41 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:11.162 15:56:41 -- rpc/rpc.sh@84 -- # killprocess 1176636 00:06:11.162 15:56:41 -- common/autotest_common.sh@936 -- # '[' -z 1176636 ']' 00:06:11.162 15:56:41 -- common/autotest_common.sh@940 -- # kill -0 1176636 00:06:11.162 15:56:41 -- common/autotest_common.sh@941 -- # uname 00:06:11.162 15:56:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:11.162 15:56:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1176636 00:06:11.162 15:56:41 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:11.162 15:56:41 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:11.162 15:56:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1176636' 00:06:11.162 killing process with pid 1176636 00:06:11.162 15:56:41 -- common/autotest_common.sh@955 -- # kill 1176636 00:06:11.162 15:56:41 -- common/autotest_common.sh@960 -- # wait 1176636 00:06:11.422 00:06:11.422 real 0m2.486s 00:06:11.422 user 0m3.068s 00:06:11.422 sys 0m0.781s 00:06:11.422 15:56:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.422 15:56:42 -- common/autotest_common.sh@10 -- # set +x 00:06:11.422 ************************************ 00:06:11.422 END TEST rpc 00:06:11.422 ************************************ 00:06:11.422 15:56:42 -- spdk/autotest.sh@164 -- # run_test rpc_client /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:11.422 15:56:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.422 15:56:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.422 15:56:42 -- common/autotest_common.sh@10 -- # set +x 00:06:11.422 ************************************ 00:06:11.422 START TEST rpc_client 00:06:11.422 ************************************ 00:06:11.422 15:56:42 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:11.681 * Looking for test storage... 
00:06:11.681 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client 00:06:11.681 15:56:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:11.681 15:56:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:11.681 15:56:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:11.681 15:56:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:11.681 15:56:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:11.681 15:56:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:11.681 15:56:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:11.681 15:56:42 -- scripts/common.sh@335 -- # IFS=.-: 00:06:11.681 15:56:42 -- scripts/common.sh@335 -- # read -ra ver1 00:06:11.681 15:56:42 -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.681 15:56:42 -- scripts/common.sh@336 -- # read -ra ver2 00:06:11.681 15:56:42 -- scripts/common.sh@337 -- # local 'op=<' 00:06:11.682 15:56:42 -- scripts/common.sh@339 -- # ver1_l=2 00:06:11.682 15:56:42 -- scripts/common.sh@340 -- # ver2_l=1 00:06:11.682 15:56:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:11.682 15:56:42 -- scripts/common.sh@343 -- # case "$op" in 00:06:11.682 15:56:42 -- scripts/common.sh@344 -- # : 1 00:06:11.682 15:56:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:11.682 15:56:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:11.682 15:56:42 -- scripts/common.sh@364 -- # decimal 1 00:06:11.682 15:56:42 -- scripts/common.sh@352 -- # local d=1 00:06:11.682 15:56:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.682 15:56:42 -- scripts/common.sh@354 -- # echo 1 00:06:11.682 15:56:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:11.682 15:56:42 -- scripts/common.sh@365 -- # decimal 2 00:06:11.682 15:56:42 -- scripts/common.sh@352 -- # local d=2 00:06:11.682 15:56:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.682 15:56:42 -- scripts/common.sh@354 -- # echo 2 00:06:11.682 15:56:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:11.682 15:56:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:11.682 15:56:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:11.682 15:56:42 -- scripts/common.sh@367 -- # return 0 00:06:11.682 15:56:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.682 15:56:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:11.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.682 --rc genhtml_branch_coverage=1 00:06:11.682 --rc genhtml_function_coverage=1 00:06:11.682 --rc genhtml_legend=1 00:06:11.682 --rc geninfo_all_blocks=1 00:06:11.682 --rc geninfo_unexecuted_blocks=1 00:06:11.682 00:06:11.682 ' 00:06:11.682 15:56:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:11.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.682 --rc genhtml_branch_coverage=1 00:06:11.682 --rc genhtml_function_coverage=1 00:06:11.682 --rc genhtml_legend=1 00:06:11.682 --rc geninfo_all_blocks=1 00:06:11.682 --rc geninfo_unexecuted_blocks=1 00:06:11.682 00:06:11.682 ' 00:06:11.682 15:56:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:11.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.682 --rc genhtml_branch_coverage=1 00:06:11.682 --rc genhtml_function_coverage=1 00:06:11.682 --rc genhtml_legend=1 00:06:11.682 --rc geninfo_all_blocks=1 00:06:11.682 --rc geninfo_unexecuted_blocks=1 00:06:11.682 00:06:11.682 ' 
00:06:11.682 15:56:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:11.682 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.682 --rc genhtml_branch_coverage=1 00:06:11.682 --rc genhtml_function_coverage=1 00:06:11.682 --rc genhtml_legend=1 00:06:11.682 --rc geninfo_all_blocks=1 00:06:11.682 --rc geninfo_unexecuted_blocks=1 00:06:11.682 00:06:11.682 ' 00:06:11.682 15:56:42 -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:11.682 OK 00:06:11.682 15:56:42 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:11.682 00:06:11.682 real 0m0.211s 00:06:11.682 user 0m0.127s 00:06:11.682 sys 0m0.102s 00:06:11.682 15:56:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:11.682 15:56:42 -- common/autotest_common.sh@10 -- # set +x 00:06:11.682 ************************************ 00:06:11.682 END TEST rpc_client 00:06:11.682 ************************************ 00:06:11.682 15:56:42 -- spdk/autotest.sh@165 -- # run_test json_config /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:11.682 15:56:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:11.682 15:56:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:11.682 15:56:42 -- common/autotest_common.sh@10 -- # set +x 00:06:11.682 ************************************ 00:06:11.682 START TEST json_config 00:06:11.682 ************************************ 00:06:11.682 15:56:42 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config.sh 00:06:11.682 15:56:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:11.682 15:56:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:11.682 15:56:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:11.942 15:56:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:11.942 15:56:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:11.942 15:56:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:11.942 15:56:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:11.942 15:56:42 -- scripts/common.sh@335 -- # IFS=.-: 00:06:11.942 15:56:42 -- scripts/common.sh@335 -- # read -ra ver1 00:06:11.942 15:56:42 -- scripts/common.sh@336 -- # IFS=.-: 00:06:11.942 15:56:42 -- scripts/common.sh@336 -- # read -ra ver2 00:06:11.942 15:56:42 -- scripts/common.sh@337 -- # local 'op=<' 00:06:11.942 15:56:42 -- scripts/common.sh@339 -- # ver1_l=2 00:06:11.942 15:56:42 -- scripts/common.sh@340 -- # ver2_l=1 00:06:11.942 15:56:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:11.942 15:56:42 -- scripts/common.sh@343 -- # case "$op" in 00:06:11.942 15:56:42 -- scripts/common.sh@344 -- # : 1 00:06:11.942 15:56:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:11.942 15:56:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:11.942 15:56:42 -- scripts/common.sh@364 -- # decimal 1 00:06:11.942 15:56:42 -- scripts/common.sh@352 -- # local d=1 00:06:11.942 15:56:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:11.942 15:56:42 -- scripts/common.sh@354 -- # echo 1 00:06:11.942 15:56:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:11.942 15:56:42 -- scripts/common.sh@365 -- # decimal 2 00:06:11.942 15:56:42 -- scripts/common.sh@352 -- # local d=2 00:06:11.942 15:56:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:11.942 15:56:42 -- scripts/common.sh@354 -- # echo 2 00:06:11.942 15:56:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:11.942 15:56:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:11.942 15:56:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:11.942 15:56:42 -- scripts/common.sh@367 -- # return 0 00:06:11.942 15:56:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:11.942 15:56:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:11.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.942 --rc genhtml_branch_coverage=1 00:06:11.942 --rc genhtml_function_coverage=1 00:06:11.942 --rc genhtml_legend=1 00:06:11.942 --rc geninfo_all_blocks=1 00:06:11.942 --rc geninfo_unexecuted_blocks=1 00:06:11.942 00:06:11.942 ' 00:06:11.942 15:56:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:11.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.942 --rc genhtml_branch_coverage=1 00:06:11.942 --rc genhtml_function_coverage=1 00:06:11.942 --rc genhtml_legend=1 00:06:11.942 --rc geninfo_all_blocks=1 00:06:11.942 --rc geninfo_unexecuted_blocks=1 00:06:11.942 00:06:11.942 ' 00:06:11.942 15:56:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:11.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.942 --rc genhtml_branch_coverage=1 00:06:11.942 --rc genhtml_function_coverage=1 00:06:11.942 --rc genhtml_legend=1 00:06:11.942 --rc geninfo_all_blocks=1 00:06:11.942 --rc geninfo_unexecuted_blocks=1 00:06:11.942 00:06:11.942 ' 00:06:11.942 15:56:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:11.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:11.942 --rc genhtml_branch_coverage=1 00:06:11.942 --rc genhtml_function_coverage=1 00:06:11.942 --rc genhtml_legend=1 00:06:11.942 --rc geninfo_all_blocks=1 00:06:11.942 --rc geninfo_unexecuted_blocks=1 00:06:11.942 00:06:11.942 ' 00:06:11.942 15:56:42 -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:11.942 15:56:42 -- nvmf/common.sh@7 -- # uname -s 00:06:11.942 15:56:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:11.942 15:56:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:11.942 15:56:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:11.942 15:56:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:11.942 15:56:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:11.942 15:56:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:11.942 15:56:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:11.942 15:56:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:11.942 15:56:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:11.942 15:56:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:11.942 15:56:42 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:11.942 15:56:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:11.942 15:56:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:11.942 15:56:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:11.942 15:56:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:11.942 15:56:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:11.942 15:56:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:11.942 15:56:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:11.942 15:56:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:11.942 15:56:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.942 15:56:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.942 15:56:42 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.942 15:56:42 -- paths/export.sh@5 -- # export PATH 00:06:11.942 15:56:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:11.942 15:56:42 -- nvmf/common.sh@46 -- # : 0 00:06:11.942 15:56:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:11.942 15:56:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:11.942 15:56:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:11.942 15:56:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:11.942 15:56:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:11.942 15:56:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:11.942 15:56:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:11.942 15:56:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:11.942 15:56:42 -- json_config/json_config.sh@10 -- # [[ 0 -eq 1 ]] 00:06:11.942 15:56:42 -- json_config/json_config.sh@14 -- # [[ 0 -ne 1 ]] 00:06:11.942 15:56:42 -- json_config/json_config.sh@14 -- # [[ 0 -eq 1 ]] 00:06:11.943 15:56:42 -- 
json_config/json_config.sh@25 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:11.943 15:56:42 -- json_config/json_config.sh@30 -- # app_pid=(['target']='' ['initiator']='') 00:06:11.943 15:56:42 -- json_config/json_config.sh@30 -- # declare -A app_pid 00:06:11.943 15:56:42 -- json_config/json_config.sh@31 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:11.943 15:56:42 -- json_config/json_config.sh@31 -- # declare -A app_socket 00:06:11.943 15:56:42 -- json_config/json_config.sh@32 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:11.943 15:56:42 -- json_config/json_config.sh@32 -- # declare -A app_params 00:06:11.943 15:56:42 -- json_config/json_config.sh@33 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json') 00:06:11.943 15:56:42 -- json_config/json_config.sh@33 -- # declare -A configs_path 00:06:11.943 15:56:42 -- json_config/json_config.sh@43 -- # last_event_id=0 00:06:11.943 15:56:42 -- json_config/json_config.sh@418 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:11.943 15:56:42 -- json_config/json_config.sh@419 -- # echo 'INFO: JSON configuration test init' 00:06:11.943 INFO: JSON configuration test init 00:06:11.943 15:56:42 -- json_config/json_config.sh@420 -- # json_config_test_init 00:06:11.943 15:56:42 -- json_config/json_config.sh@315 -- # timing_enter json_config_test_init 00:06:11.943 15:56:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:11.943 15:56:42 -- common/autotest_common.sh@10 -- # set +x 00:06:11.943 15:56:42 -- json_config/json_config.sh@316 -- # timing_enter json_config_setup_target 00:06:11.943 15:56:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:11.943 15:56:42 -- common/autotest_common.sh@10 -- # set +x 00:06:11.943 15:56:42 -- json_config/json_config.sh@318 -- # json_config_test_start_app target --wait-for-rpc 00:06:11.943 15:56:42 -- json_config/json_config.sh@98 -- # local app=target 00:06:11.943 15:56:42 -- json_config/json_config.sh@99 -- # shift 00:06:11.943 15:56:42 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:11.943 15:56:42 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:11.943 15:56:42 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:11.943 15:56:42 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:11.943 15:56:42 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:11.943 15:56:42 -- json_config/json_config.sh@111 -- # app_pid[$app]=1177262 00:06:11.943 15:56:42 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:11.943 Waiting for target to run... 
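[Editor's note] The waitforlisten step recorded just below blocks until this freshly launched spdk_tgt instance answers on its private RPC socket. A minimal standalone sketch of the same launch-and-poll pattern, using the paths shown in this workspace; rpc_get_methods is assumed to be available here purely as a liveness probe:

    # Start the target on core 0 with a 1024 MB hugepage pool and a dedicated RPC socket,
    # mirroring the flags recorded in the next log entry.
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt \
        -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc &
    tgt_pid=$!

    # Poll the socket until the app accepts RPCs, then continue with configuration.
    until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py \
            -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done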
00:06:11.943 15:56:42 -- json_config/json_config.sh@114 -- # waitforlisten 1177262 /var/tmp/spdk_tgt.sock 00:06:11.943 15:56:42 -- common/autotest_common.sh@829 -- # '[' -z 1177262 ']' 00:06:11.943 15:56:42 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:11.943 15:56:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:11.943 15:56:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.943 15:56:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:11.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:11.943 15:56:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.943 15:56:42 -- common/autotest_common.sh@10 -- # set +x 00:06:11.943 [2024-11-20 15:56:42.657793] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:11.943 [2024-11-20 15:56:42.657848] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1177262 ] 00:06:11.943 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.202 [2024-11-20 15:56:42.974098] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.202 [2024-11-20 15:56:42.994827] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:12.202 [2024-11-20 15:56:42.994941] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.772 15:56:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:12.772 15:56:43 -- common/autotest_common.sh@862 -- # return 0 00:06:12.772 15:56:43 -- json_config/json_config.sh@115 -- # echo '' 00:06:12.772 00:06:12.772 15:56:43 -- json_config/json_config.sh@322 -- # create_accel_config 00:06:12.772 15:56:43 -- json_config/json_config.sh@146 -- # timing_enter create_accel_config 00:06:12.772 15:56:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:12.772 15:56:43 -- common/autotest_common.sh@10 -- # set +x 00:06:12.772 15:56:43 -- json_config/json_config.sh@148 -- # [[ 0 -eq 1 ]] 00:06:12.772 15:56:43 -- json_config/json_config.sh@154 -- # timing_exit create_accel_config 00:06:12.772 15:56:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:12.772 15:56:43 -- common/autotest_common.sh@10 -- # set +x 00:06:12.772 15:56:43 -- json_config/json_config.sh@326 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:12.772 15:56:43 -- json_config/json_config.sh@327 -- # tgt_rpc load_config 00:06:12.772 15:56:43 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:16.065 15:56:46 -- json_config/json_config.sh@329 -- # tgt_check_notification_types 00:06:16.065 15:56:46 -- json_config/json_config.sh@46 -- # timing_enter tgt_check_notification_types 00:06:16.065 15:56:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.065 15:56:46 -- common/autotest_common.sh@10 -- # set +x 00:06:16.065 15:56:46 -- json_config/json_config.sh@48 -- # local ret=0 00:06:16.065 15:56:46 -- json_config/json_config.sh@49 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:16.065 15:56:46 -- 
json_config/json_config.sh@49 -- # local enabled_types 00:06:16.065 15:56:46 -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:16.066 15:56:46 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:16.066 15:56:46 -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:16.066 15:56:46 -- json_config/json_config.sh@51 -- # get_types=('bdev_register' 'bdev_unregister') 00:06:16.066 15:56:46 -- json_config/json_config.sh@51 -- # local get_types 00:06:16.066 15:56:46 -- json_config/json_config.sh@52 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:06:16.066 15:56:46 -- json_config/json_config.sh@57 -- # timing_exit tgt_check_notification_types 00:06:16.066 15:56:46 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:16.066 15:56:46 -- common/autotest_common.sh@10 -- # set +x 00:06:16.066 15:56:46 -- json_config/json_config.sh@58 -- # return 0 00:06:16.066 15:56:46 -- json_config/json_config.sh@331 -- # [[ 0 -eq 1 ]] 00:06:16.066 15:56:46 -- json_config/json_config.sh@335 -- # [[ 0 -eq 1 ]] 00:06:16.066 15:56:46 -- json_config/json_config.sh@339 -- # [[ 0 -eq 1 ]] 00:06:16.066 15:56:46 -- json_config/json_config.sh@343 -- # [[ 1 -eq 1 ]] 00:06:16.066 15:56:46 -- json_config/json_config.sh@344 -- # create_nvmf_subsystem_config 00:06:16.066 15:56:46 -- json_config/json_config.sh@283 -- # timing_enter create_nvmf_subsystem_config 00:06:16.066 15:56:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:16.066 15:56:46 -- common/autotest_common.sh@10 -- # set +x 00:06:16.066 15:56:46 -- json_config/json_config.sh@285 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:16.066 15:56:46 -- json_config/json_config.sh@286 -- # [[ rdma == \r\d\m\a ]] 00:06:16.066 15:56:46 -- json_config/json_config.sh@287 -- # TEST_TRANSPORT=rdma 00:06:16.066 15:56:46 -- json_config/json_config.sh@287 -- # nvmftestinit 00:06:16.066 15:56:46 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:06:16.066 15:56:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:16.066 15:56:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:06:16.066 15:56:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:06:16.066 15:56:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:06:16.066 15:56:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:16.066 15:56:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:06:16.066 15:56:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:16.066 15:56:46 -- nvmf/common.sh@402 -- # [[ phy-fallback != virt ]] 00:06:16.066 15:56:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:06:16.066 15:56:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:06:16.066 15:56:46 -- common/autotest_common.sh@10 -- # set +x 00:06:24.190 15:56:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:06:24.190 15:56:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:06:24.190 15:56:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:06:24.190 15:56:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:06:24.190 15:56:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:06:24.190 15:56:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:06:24.190 15:56:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:06:24.190 15:56:53 -- nvmf/common.sh@294 -- # net_devs=() 00:06:24.190 15:56:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:06:24.190 15:56:53 -- nvmf/common.sh@295 -- # 
e810=() 00:06:24.190 15:56:53 -- nvmf/common.sh@295 -- # local -ga e810 00:06:24.190 15:56:53 -- nvmf/common.sh@296 -- # x722=() 00:06:24.190 15:56:53 -- nvmf/common.sh@296 -- # local -ga x722 00:06:24.190 15:56:53 -- nvmf/common.sh@297 -- # mlx=() 00:06:24.190 15:56:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:06:24.190 15:56:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:24.190 15:56:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:24.190 15:56:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:24.190 15:56:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:24.190 15:56:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:24.190 15:56:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:24.190 15:56:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:24.190 15:56:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:24.190 15:56:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:24.190 15:56:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:24.190 15:56:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:24.190 15:56:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:06:24.190 15:56:53 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:06:24.190 15:56:53 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:06:24.190 15:56:53 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:06:24.190 15:56:53 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:06:24.190 15:56:53 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:06:24.190 15:56:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:06:24.190 15:56:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:24.190 15:56:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:06:24.190 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:06:24.190 15:56:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:06:24.190 15:56:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:06:24.190 15:56:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:24.190 15:56:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:24.190 15:56:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:06:24.190 15:56:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:06:24.190 15:56:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:06:24.190 15:56:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:06:24.190 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:06:24.190 15:56:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:06:24.190 15:56:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:06:24.190 15:56:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:06:24.190 15:56:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:06:24.190 15:56:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:06:24.190 15:56:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:06:24.190 15:56:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:06:24.190 15:56:53 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:06:24.190 15:56:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:24.190 15:56:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.190 15:56:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
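[Editor's note] The enumeration running here (and continuing below) matches each Mellanox function (vendor 0x15b3) to the mlx_0_* net device the kernel exposes under sysfs. A rough manual equivalent for the two ports this log reports at 0000:d9:00.0 and 0000:d9:00.1:

    # Show the ConnectX functions the script matches on (vendor 0x15b3, device 0x1015).
    lspci -d 15b3: -nn

    # Map each PCI function to its kernel net interface, as nvmf/common.sh does via sysfs.
    for pci in 0000:d9:00.0 0000:d9:00.1; do
        echo "$pci -> $(ls /sys/bus/pci/devices/$pci/net/ 2>/dev/null)"
    done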
00:06:24.190 15:56:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.190 15:56:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:06:24.190 Found net devices under 0000:d9:00.0: mlx_0_0 00:06:24.190 15:56:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.190 15:56:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:06:24.190 15:56:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:24.190 15:56:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:06:24.190 15:56:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:24.190 15:56:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:06:24.190 Found net devices under 0000:d9:00.1: mlx_0_1 00:06:24.190 15:56:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:06:24.190 15:56:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:06:24.190 15:56:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:06:24.190 15:56:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:06:24.190 15:56:53 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:06:24.190 15:56:53 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:06:24.190 15:56:53 -- nvmf/common.sh@408 -- # rdma_device_init 00:06:24.190 15:56:53 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:06:24.190 15:56:53 -- nvmf/common.sh@57 -- # uname 00:06:24.190 15:56:53 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:06:24.190 15:56:53 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:06:24.191 15:56:53 -- nvmf/common.sh@62 -- # modprobe ib_core 00:06:24.191 15:56:53 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:06:24.191 15:56:53 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:06:24.191 15:56:53 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:06:24.191 15:56:53 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:06:24.191 15:56:53 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:06:24.191 15:56:53 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:06:24.191 15:56:53 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:06:24.191 15:56:53 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:06:24.191 15:56:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:24.191 15:56:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:06:24.191 15:56:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:06:24.191 15:56:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:24.191 15:56:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:06:24.191 15:56:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:06:24.191 15:56:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:24.191 15:56:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:24.191 15:56:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:06:24.191 15:56:53 -- nvmf/common.sh@104 -- # continue 2 00:06:24.191 15:56:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:06:24.191 15:56:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:24.191 15:56:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:24.191 15:56:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:24.191 15:56:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:24.191 15:56:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:06:24.191 15:56:53 -- nvmf/common.sh@104 -- # continue 2 00:06:24.191 15:56:53 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:06:24.191 15:56:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:06:24.191 15:56:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:06:24.191 15:56:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:06:24.191 15:56:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:06:24.191 15:56:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:06:24.191 15:56:53 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:06:24.191 15:56:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:06:24.191 15:56:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:06:24.191 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:24.191 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:06:24.191 altname enp217s0f0np0 00:06:24.191 altname ens818f0np0 00:06:24.191 inet 192.168.100.8/24 scope global mlx_0_0 00:06:24.191 valid_lft forever preferred_lft forever 00:06:24.191 15:56:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:06:24.191 15:56:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:06:24.191 15:56:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:06:24.191 15:56:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:06:24.191 15:56:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:06:24.191 15:56:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:06:24.217 15:56:53 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:06:24.217 15:56:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:06:24.217 15:56:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:06:24.217 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:06:24.217 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:06:24.217 altname enp217s0f1np1 00:06:24.217 altname ens818f1np1 00:06:24.217 inet 192.168.100.9/24 scope global mlx_0_1 00:06:24.217 valid_lft forever preferred_lft forever 00:06:24.217 15:56:53 -- nvmf/common.sh@410 -- # return 0 00:06:24.217 15:56:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:06:24.217 15:56:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:06:24.217 15:56:53 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:06:24.217 15:56:53 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:06:24.217 15:56:53 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:06:24.217 15:56:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:06:24.217 15:56:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:06:24.217 15:56:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:06:24.217 15:56:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:06:24.217 15:56:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:06:24.217 15:56:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:06:24.217 15:56:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:24.217 15:56:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:06:24.217 15:56:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:06:24.217 15:56:53 -- nvmf/common.sh@104 -- # continue 2 00:06:24.217 15:56:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:06:24.217 15:56:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:24.217 15:56:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:06:24.217 15:56:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:06:24.217 15:56:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:06:24.217 15:56:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:06:24.217 15:56:53 -- 
nvmf/common.sh@104 -- # continue 2 00:06:24.217 15:56:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:06:24.217 15:56:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:06:24.217 15:56:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:06:24.217 15:56:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:06:24.217 15:56:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:06:24.217 15:56:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:06:24.217 15:56:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:06:24.217 15:56:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:06:24.217 15:56:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:06:24.217 15:56:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:06:24.217 15:56:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:06:24.217 15:56:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:06:24.217 15:56:53 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:06:24.217 192.168.100.9' 00:06:24.217 15:56:53 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:06:24.217 192.168.100.9' 00:06:24.217 15:56:53 -- nvmf/common.sh@445 -- # head -n 1 00:06:24.217 15:56:53 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:06:24.217 15:56:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:06:24.217 192.168.100.9' 00:06:24.217 15:56:53 -- nvmf/common.sh@446 -- # tail -n +2 00:06:24.217 15:56:53 -- nvmf/common.sh@446 -- # head -n 1 00:06:24.217 15:56:53 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:06:24.217 15:56:53 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:06:24.217 15:56:53 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:06:24.217 15:56:53 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:06:24.217 15:56:53 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:06:24.217 15:56:53 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:06:24.217 15:56:53 -- json_config/json_config.sh@290 -- # [[ -z 192.168.100.8 ]] 00:06:24.217 15:56:53 -- json_config/json_config.sh@295 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:24.217 15:56:53 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:24.217 MallocForNvmf0 00:06:24.217 15:56:54 -- json_config/json_config.sh@296 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:24.217 15:56:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:24.217 MallocForNvmf1 00:06:24.217 15:56:54 -- json_config/json_config.sh@298 -- # tgt_rpc nvmf_create_transport -t rdma -u 8192 -c 0 00:06:24.217 15:56:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0 00:06:24.218 [2024-11-20 15:56:54.404770] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:06:24.218 [2024-11-20 15:56:54.444541] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17db560/0x17e81c0) succeed. 00:06:24.218 [2024-11-20 15:56:54.458679] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17dd700/0x1829860) succeed. 
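[Editor's note] The two create_ib_device notices above confirm that nvmf_create_transport brought the RDMA transport up on both mlx5 ports, after raising the requested in-capsule data size of 0 to the 256-byte minimum flagged in the WARNING. For reference, the call the harness issued plus a follow-up query to inspect the result; nvmf_get_transports is assumed to be available in this build:

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    # Transport creation exactly as recorded in the log: RDMA with the -u 8192 -c 0 options.
    $rpc -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t rdma -u 8192 -c 0

    # Dump the transport parameters the target actually registered.
    $rpc -s /var/tmp/spdk_tgt.sock nvmf_get_transports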
00:06:24.218 15:56:54 -- json_config/json_config.sh@299 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:24.218 15:56:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:24.218 15:56:54 -- json_config/json_config.sh@300 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:24.218 15:56:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:24.218 15:56:54 -- json_config/json_config.sh@301 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:24.218 15:56:54 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:24.477 15:56:55 -- json_config/json_config.sh@302 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:24.477 15:56:55 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:06:24.477 [2024-11-20 15:56:55.243151] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:24.477 15:56:55 -- json_config/json_config.sh@304 -- # timing_exit create_nvmf_subsystem_config 00:06:24.477 15:56:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:24.477 15:56:55 -- common/autotest_common.sh@10 -- # set +x 00:06:24.736 15:56:55 -- json_config/json_config.sh@346 -- # timing_exit json_config_setup_target 00:06:24.736 15:56:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:24.736 15:56:55 -- common/autotest_common.sh@10 -- # set +x 00:06:24.736 15:56:55 -- json_config/json_config.sh@348 -- # [[ 0 -eq 1 ]] 00:06:24.736 15:56:55 -- json_config/json_config.sh@353 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:24.736 15:56:55 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:24.736 MallocBdevForConfigChangeCheck 00:06:24.996 15:56:55 -- json_config/json_config.sh@355 -- # timing_exit json_config_test_init 00:06:24.996 15:56:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:24.996 15:56:55 -- common/autotest_common.sh@10 -- # set +x 00:06:24.996 15:56:55 -- json_config/json_config.sh@422 -- # tgt_rpc save_config 00:06:24.996 15:56:55 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:25.262 15:56:55 -- json_config/json_config.sh@424 -- # echo 'INFO: shutting down applications...' 00:06:25.262 INFO: shutting down applications... 
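Condensed from the rpc.py calls traced above, the NVMe-oF/RDMA target configuration built by json_config_setup_target amounts to the following sequence (the paths, socket, and values are exactly those shown in the log; this is only a compact restatement, not additional configuration):

  RPC="/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
  # Back-end malloc bdevs used as namespaces
  $RPC bdev_malloc_create 8 512  --name MallocForNvmf0
  $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
  # RDMA transport, then one subsystem with two namespaces and a listener
  $RPC nvmf_create_transport -t rdma -u 8192 -c 0
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420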
00:06:25.262 15:56:55 -- json_config/json_config.sh@425 -- # [[ 0 -eq 1 ]] 00:06:25.262 15:56:55 -- json_config/json_config.sh@431 -- # json_config_clear target 00:06:25.262 15:56:55 -- json_config/json_config.sh@385 -- # [[ -n 22 ]] 00:06:25.262 15:56:55 -- json_config/json_config.sh@386 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:27.800 Calling clear_iscsi_subsystem 00:06:27.800 Calling clear_nvmf_subsystem 00:06:27.800 Calling clear_nbd_subsystem 00:06:27.800 Calling clear_ublk_subsystem 00:06:27.800 Calling clear_vhost_blk_subsystem 00:06:27.800 Calling clear_vhost_scsi_subsystem 00:06:27.800 Calling clear_scheduler_subsystem 00:06:27.800 Calling clear_bdev_subsystem 00:06:27.800 Calling clear_accel_subsystem 00:06:27.800 Calling clear_vmd_subsystem 00:06:27.800 Calling clear_sock_subsystem 00:06:27.800 Calling clear_iobuf_subsystem 00:06:27.800 15:56:58 -- json_config/json_config.sh@390 -- # local config_filter=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py 00:06:27.800 15:56:58 -- json_config/json_config.sh@396 -- # count=100 00:06:27.800 15:56:58 -- json_config/json_config.sh@397 -- # '[' 100 -gt 0 ']' 00:06:27.800 15:56:58 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:27.800 15:56:58 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:27.800 15:56:58 -- json_config/json_config.sh@398 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:28.059 15:56:58 -- json_config/json_config.sh@398 -- # break 00:06:28.059 15:56:58 -- json_config/json_config.sh@403 -- # '[' 100 -eq 0 ']' 00:06:28.059 15:56:58 -- json_config/json_config.sh@432 -- # json_config_test_shutdown_app target 00:06:28.059 15:56:58 -- json_config/json_config.sh@120 -- # local app=target 00:06:28.059 15:56:58 -- json_config/json_config.sh@123 -- # [[ -n 22 ]] 00:06:28.059 15:56:58 -- json_config/json_config.sh@124 -- # [[ -n 1177262 ]] 00:06:28.059 15:56:58 -- json_config/json_config.sh@127 -- # kill -SIGINT 1177262 00:06:28.059 15:56:58 -- json_config/json_config.sh@129 -- # (( i = 0 )) 00:06:28.059 15:56:58 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:28.059 15:56:58 -- json_config/json_config.sh@130 -- # kill -0 1177262 00:06:28.059 15:56:58 -- json_config/json_config.sh@134 -- # sleep 0.5 00:06:28.632 15:56:59 -- json_config/json_config.sh@129 -- # (( i++ )) 00:06:28.632 15:56:59 -- json_config/json_config.sh@129 -- # (( i < 30 )) 00:06:28.632 15:56:59 -- json_config/json_config.sh@130 -- # kill -0 1177262 00:06:28.632 15:56:59 -- json_config/json_config.sh@131 -- # app_pid[$app]= 00:06:28.632 15:56:59 -- json_config/json_config.sh@132 -- # break 00:06:28.632 15:56:59 -- json_config/json_config.sh@137 -- # [[ -n '' ]] 00:06:28.632 15:56:59 -- json_config/json_config.sh@142 -- # echo 'SPDK target shutdown done' 00:06:28.632 SPDK target shutdown done 00:06:28.632 15:56:59 -- json_config/json_config.sh@434 -- # echo 'INFO: relaunching applications...' 00:06:28.632 INFO: relaunching applications... 
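The shutdown sequence traced above (json_config_test_shutdown_app) sends SIGINT to the target and then polls its pid for up to ~15 seconds. A minimal sketch of that loop, assuming only what the xtrace shows (the pid 1177262 is just the value from this particular run):

  # Sketch of the traced shutdown wait loop from json_config.sh
  kill -SIGINT "$app_pid"
  for (( i = 0; i < 30; i++ )); do
      if ! kill -0 "$app_pid" 2>/dev/null; then
          echo 'SPDK target shutdown done'
          break
      fi
      sleep 0.5
  done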
00:06:28.632 15:56:59 -- json_config/json_config.sh@435 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:28.632 15:56:59 -- json_config/json_config.sh@98 -- # local app=target 00:06:28.632 15:56:59 -- json_config/json_config.sh@99 -- # shift 00:06:28.632 15:56:59 -- json_config/json_config.sh@101 -- # [[ -n 22 ]] 00:06:28.632 15:56:59 -- json_config/json_config.sh@102 -- # [[ -z '' ]] 00:06:28.632 15:56:59 -- json_config/json_config.sh@104 -- # local app_extra_params= 00:06:28.632 15:56:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:28.632 15:56:59 -- json_config/json_config.sh@105 -- # [[ 0 -eq 1 ]] 00:06:28.632 15:56:59 -- json_config/json_config.sh@111 -- # app_pid[$app]=1182304 00:06:28.632 15:56:59 -- json_config/json_config.sh@113 -- # echo 'Waiting for target to run...' 00:06:28.632 Waiting for target to run... 00:06:28.632 15:56:59 -- json_config/json_config.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:28.632 15:56:59 -- json_config/json_config.sh@114 -- # waitforlisten 1182304 /var/tmp/spdk_tgt.sock 00:06:28.632 15:56:59 -- common/autotest_common.sh@829 -- # '[' -z 1182304 ']' 00:06:28.632 15:56:59 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:28.632 15:56:59 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:28.632 15:56:59 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:28.632 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:28.632 15:56:59 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:28.632 15:56:59 -- common/autotest_common.sh@10 -- # set +x 00:06:28.632 [2024-11-20 15:56:59.339623] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:28.632 [2024-11-20 15:56:59.339684] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1182304 ] 00:06:28.632 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.199 [2024-11-20 15:56:59.798256] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.199 [2024-11-20 15:56:59.827951] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:29.199 [2024-11-20 15:56:59.828071] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.489 [2024-11-20 15:57:02.862935] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21b4fb0/0x21733f0) succeed. 00:06:32.489 [2024-11-20 15:57:02.874467] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21b7150/0x2020f90) succeed. 
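The relaunch step above restarts spdk_tgt from the configuration saved earlier and waits for its RPC socket. A sketch of that step using the exact command line from the log (waitforlisten is the autotest_common.sh helper; its internals are not shown in this trace, only its invocation):

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
      -r /var/tmp/spdk_tgt.sock \
      --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json &
  app_pid=$!
  # Block until the target answers on the UNIX-domain RPC socket
  waitforlisten "$app_pid" /var/tmp/spdk_tgt.sock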
00:06:32.489 [2024-11-20 15:57:02.929822] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:06:32.748 15:57:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.748 15:57:03 -- common/autotest_common.sh@862 -- # return 0 00:06:32.748 15:57:03 -- json_config/json_config.sh@115 -- # echo '' 00:06:32.748 00:06:32.748 15:57:03 -- json_config/json_config.sh@436 -- # [[ 0 -eq 1 ]] 00:06:32.748 15:57:03 -- json_config/json_config.sh@440 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:32.748 INFO: Checking if target configuration is the same... 00:06:32.748 15:57:03 -- json_config/json_config.sh@441 -- # tgt_rpc save_config 00:06:32.748 15:57:03 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:32.748 15:57:03 -- json_config/json_config.sh@441 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:32.748 + '[' 2 -ne 2 ']' 00:06:32.748 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:32.748 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 00:06:32.748 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:32.748 +++ basename /dev/fd/62 00:06:32.748 ++ mktemp /tmp/62.XXX 00:06:32.748 + tmp_file_1=/tmp/62.QyA 00:06:32.748 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:32.748 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:33.043 + tmp_file_2=/tmp/spdk_tgt_config.json.Cfe 00:06:33.043 + ret=0 00:06:33.043 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:33.331 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:33.331 + diff -u /tmp/62.QyA /tmp/spdk_tgt_config.json.Cfe 00:06:33.331 + echo 'INFO: JSON config files are the same' 00:06:33.331 INFO: JSON config files are the same 00:06:33.331 + rm /tmp/62.QyA /tmp/spdk_tgt_config.json.Cfe 00:06:33.331 + exit 0 00:06:33.331 15:57:03 -- json_config/json_config.sh@442 -- # [[ 0 -eq 1 ]] 00:06:33.331 15:57:03 -- json_config/json_config.sh@447 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:33.331 INFO: changing configuration and checking if this can be detected... 00:06:33.331 15:57:03 -- json_config/json_config.sh@449 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:33.331 15:57:03 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:33.331 15:57:04 -- json_config/json_config.sh@450 -- # tgt_rpc save_config 00:06:33.331 15:57:04 -- json_config/json_config.sh@450 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.331 15:57:04 -- json_config/json_config.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:33.331 + '[' 2 -ne 2 ']' 00:06:33.331 +++ dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:33.331 ++ readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/../.. 
00:06:33.331 + rootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:06:33.331 +++ basename /dev/fd/62 00:06:33.331 ++ mktemp /tmp/62.XXX 00:06:33.331 + tmp_file_1=/tmp/62.SkX 00:06:33.331 +++ basename /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:33.331 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:33.331 + tmp_file_2=/tmp/spdk_tgt_config.json.4Xp 00:06:33.331 + ret=0 00:06:33.332 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:33.591 + /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:33.852 + diff -u /tmp/62.SkX /tmp/spdk_tgt_config.json.4Xp 00:06:33.852 + ret=1 00:06:33.852 + echo '=== Start of file: /tmp/62.SkX ===' 00:06:33.852 + cat /tmp/62.SkX 00:06:33.852 + echo '=== End of file: /tmp/62.SkX ===' 00:06:33.852 + echo '' 00:06:33.852 + echo '=== Start of file: /tmp/spdk_tgt_config.json.4Xp ===' 00:06:33.852 + cat /tmp/spdk_tgt_config.json.4Xp 00:06:33.852 + echo '=== End of file: /tmp/spdk_tgt_config.json.4Xp ===' 00:06:33.852 + echo '' 00:06:33.852 + rm /tmp/62.SkX /tmp/spdk_tgt_config.json.4Xp 00:06:33.852 + exit 1 00:06:33.852 15:57:04 -- json_config/json_config.sh@454 -- # echo 'INFO: configuration change detected.' 00:06:33.852 INFO: configuration change detected. 00:06:33.852 15:57:04 -- json_config/json_config.sh@457 -- # json_config_test_fini 00:06:33.852 15:57:04 -- json_config/json_config.sh@359 -- # timing_enter json_config_test_fini 00:06:33.852 15:57:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:33.852 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:06:33.852 15:57:04 -- json_config/json_config.sh@360 -- # local ret=0 00:06:33.852 15:57:04 -- json_config/json_config.sh@362 -- # [[ -n '' ]] 00:06:33.852 15:57:04 -- json_config/json_config.sh@370 -- # [[ -n 1182304 ]] 00:06:33.852 15:57:04 -- json_config/json_config.sh@373 -- # cleanup_bdev_subsystem_config 00:06:33.852 15:57:04 -- json_config/json_config.sh@237 -- # timing_enter cleanup_bdev_subsystem_config 00:06:33.852 15:57:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:33.852 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:06:33.852 15:57:04 -- json_config/json_config.sh@239 -- # [[ 0 -eq 1 ]] 00:06:33.852 15:57:04 -- json_config/json_config.sh@246 -- # uname -s 00:06:33.852 15:57:04 -- json_config/json_config.sh@246 -- # [[ Linux = Linux ]] 00:06:33.852 15:57:04 -- json_config/json_config.sh@247 -- # rm -f /sample_aio 00:06:33.852 15:57:04 -- json_config/json_config.sh@250 -- # [[ 0 -eq 1 ]] 00:06:33.852 15:57:04 -- json_config/json_config.sh@254 -- # timing_exit cleanup_bdev_subsystem_config 00:06:33.852 15:57:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:33.852 15:57:04 -- common/autotest_common.sh@10 -- # set +x 00:06:33.852 15:57:04 -- json_config/json_config.sh@376 -- # killprocess 1182304 00:06:33.852 15:57:04 -- common/autotest_common.sh@936 -- # '[' -z 1182304 ']' 00:06:33.852 15:57:04 -- common/autotest_common.sh@940 -- # kill -0 1182304 00:06:33.852 15:57:04 -- common/autotest_common.sh@941 -- # uname 00:06:33.852 15:57:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:33.852 15:57:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1182304 00:06:33.852 15:57:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:33.852 15:57:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:33.852 15:57:04 -- common/autotest_common.sh@954 -- # echo 'killing 
process with pid 1182304' 00:06:33.852 killing process with pid 1182304 00:06:33.852 15:57:04 -- common/autotest_common.sh@955 -- # kill 1182304 00:06:33.852 15:57:04 -- common/autotest_common.sh@960 -- # wait 1182304 00:06:36.386 15:57:07 -- json_config/json_config.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-phy-autotest/spdk/spdk_tgt_config.json 00:06:36.386 15:57:07 -- json_config/json_config.sh@380 -- # timing_exit json_config_test_fini 00:06:36.386 15:57:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:36.386 15:57:07 -- common/autotest_common.sh@10 -- # set +x 00:06:36.386 15:57:07 -- json_config/json_config.sh@381 -- # return 0 00:06:36.386 15:57:07 -- json_config/json_config.sh@459 -- # echo 'INFO: Success' 00:06:36.386 INFO: Success 00:06:36.386 15:57:07 -- json_config/json_config.sh@1 -- # nvmftestfini 00:06:36.386 15:57:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:06:36.386 15:57:07 -- nvmf/common.sh@116 -- # sync 00:06:36.386 15:57:07 -- nvmf/common.sh@118 -- # '[' '' == tcp ']' 00:06:36.386 15:57:07 -- nvmf/common.sh@118 -- # '[' '' == rdma ']' 00:06:36.386 15:57:07 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:06:36.386 15:57:07 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:06:36.386 15:57:07 -- nvmf/common.sh@483 -- # [[ '' == \t\c\p ]] 00:06:36.386 00:06:36.386 real 0m24.734s 00:06:36.386 user 0m27.780s 00:06:36.387 sys 0m7.775s 00:06:36.387 15:57:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:36.387 15:57:07 -- common/autotest_common.sh@10 -- # set +x 00:06:36.387 ************************************ 00:06:36.387 END TEST json_config 00:06:36.387 ************************************ 00:06:36.387 15:57:07 -- spdk/autotest.sh@166 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:36.387 15:57:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:36.387 15:57:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:36.387 15:57:07 -- common/autotest_common.sh@10 -- # set +x 00:06:36.387 ************************************ 00:06:36.387 START TEST json_config_extra_key 00:06:36.387 ************************************ 00:06:36.387 15:57:07 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:36.647 15:57:07 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:36.647 15:57:07 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:36.647 15:57:07 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:36.647 15:57:07 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:36.647 15:57:07 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:36.647 15:57:07 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:36.647 15:57:07 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:36.647 15:57:07 -- scripts/common.sh@335 -- # IFS=.-: 00:06:36.647 15:57:07 -- scripts/common.sh@335 -- # read -ra ver1 00:06:36.647 15:57:07 -- scripts/common.sh@336 -- # IFS=.-: 00:06:36.647 15:57:07 -- scripts/common.sh@336 -- # read -ra ver2 00:06:36.647 15:57:07 -- scripts/common.sh@337 -- # local 'op=<' 00:06:36.647 15:57:07 -- scripts/common.sh@339 -- # ver1_l=2 00:06:36.647 15:57:07 -- scripts/common.sh@340 -- # ver2_l=1 00:06:36.647 15:57:07 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:36.647 15:57:07 -- scripts/common.sh@343 -- # case "$op" in 00:06:36.647 15:57:07 -- 
scripts/common.sh@344 -- # : 1 00:06:36.647 15:57:07 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:36.647 15:57:07 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:36.647 15:57:07 -- scripts/common.sh@364 -- # decimal 1 00:06:36.647 15:57:07 -- scripts/common.sh@352 -- # local d=1 00:06:36.647 15:57:07 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:36.647 15:57:07 -- scripts/common.sh@354 -- # echo 1 00:06:36.647 15:57:07 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:36.647 15:57:07 -- scripts/common.sh@365 -- # decimal 2 00:06:36.647 15:57:07 -- scripts/common.sh@352 -- # local d=2 00:06:36.647 15:57:07 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:36.647 15:57:07 -- scripts/common.sh@354 -- # echo 2 00:06:36.647 15:57:07 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:36.647 15:57:07 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:36.647 15:57:07 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:36.647 15:57:07 -- scripts/common.sh@367 -- # return 0 00:06:36.647 15:57:07 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:36.647 15:57:07 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:36.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.647 --rc genhtml_branch_coverage=1 00:06:36.647 --rc genhtml_function_coverage=1 00:06:36.647 --rc genhtml_legend=1 00:06:36.647 --rc geninfo_all_blocks=1 00:06:36.647 --rc geninfo_unexecuted_blocks=1 00:06:36.647 00:06:36.647 ' 00:06:36.647 15:57:07 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:36.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.647 --rc genhtml_branch_coverage=1 00:06:36.647 --rc genhtml_function_coverage=1 00:06:36.647 --rc genhtml_legend=1 00:06:36.647 --rc geninfo_all_blocks=1 00:06:36.647 --rc geninfo_unexecuted_blocks=1 00:06:36.647 00:06:36.647 ' 00:06:36.647 15:57:07 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:36.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.647 --rc genhtml_branch_coverage=1 00:06:36.647 --rc genhtml_function_coverage=1 00:06:36.647 --rc genhtml_legend=1 00:06:36.647 --rc geninfo_all_blocks=1 00:06:36.647 --rc geninfo_unexecuted_blocks=1 00:06:36.647 00:06:36.647 ' 00:06:36.647 15:57:07 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:36.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:36.647 --rc genhtml_branch_coverage=1 00:06:36.647 --rc genhtml_function_coverage=1 00:06:36.647 --rc genhtml_legend=1 00:06:36.647 --rc geninfo_all_blocks=1 00:06:36.647 --rc geninfo_unexecuted_blocks=1 00:06:36.647 00:06:36.647 ' 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.647 15:57:07 -- nvmf/common.sh@7 -- # uname -s 00:06:36.647 15:57:07 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.647 15:57:07 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.647 15:57:07 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.647 15:57:07 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.647 15:57:07 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.647 15:57:07 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.647 15:57:07 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.647 15:57:07 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.647 15:57:07 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:06:36.647 15:57:07 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.647 15:57:07 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:06:36.647 15:57:07 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:06:36.647 15:57:07 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.647 15:57:07 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.647 15:57:07 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:36.647 15:57:07 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:06:36.647 15:57:07 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.647 15:57:07 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.647 15:57:07 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.647 15:57:07 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.647 15:57:07 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.647 15:57:07 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.647 15:57:07 -- paths/export.sh@5 -- # export PATH 00:06:36.647 15:57:07 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.647 15:57:07 -- nvmf/common.sh@46 -- # : 0 00:06:36.647 15:57:07 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:06:36.647 15:57:07 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:06:36.647 15:57:07 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:06:36.647 15:57:07 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.647 15:57:07 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.647 15:57:07 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:06:36.647 15:57:07 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:06:36.647 15:57:07 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@16 -- # app_pid=(['target']='') 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@16 
-- # declare -A app_pid 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@17 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_socket 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@18 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_params 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@19 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@19 -- # declare -A configs_path 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@74 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@76 -- # echo 'INFO: launching applications...' 00:06:36.647 INFO: launching applications... 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@77 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@24 -- # local app=target 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@25 -- # shift 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@27 -- # [[ -n 22 ]] 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@28 -- # [[ -z '' ]] 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@31 -- # app_pid[$app]=1184098 00:06:36.647 15:57:07 -- json_config/json_config_extra_key.sh@33 -- # echo 'Waiting for target to run...' 00:06:36.647 Waiting for target to run... 00:06:36.648 15:57:07 -- json_config/json_config_extra_key.sh@34 -- # waitforlisten 1184098 /var/tmp/spdk_tgt.sock 00:06:36.648 15:57:07 -- json_config/json_config_extra_key.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/extra_key.json 00:06:36.648 15:57:07 -- common/autotest_common.sh@829 -- # '[' -z 1184098 ']' 00:06:36.648 15:57:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:36.648 15:57:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.648 15:57:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:36.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:36.648 15:57:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.648 15:57:07 -- common/autotest_common.sh@10 -- # set +x 00:06:36.648 [2024-11-20 15:57:07.429580] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:36.648 [2024-11-20 15:57:07.429635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184098 ] 00:06:36.907 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.167 [2024-11-20 15:57:07.734714] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.167 [2024-11-20 15:57:07.755223] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:37.167 [2024-11-20 15:57:07.755337] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.736 15:57:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.736 15:57:08 -- common/autotest_common.sh@862 -- # return 0 00:06:37.736 15:57:08 -- json_config/json_config_extra_key.sh@35 -- # echo '' 00:06:37.736 00:06:37.736 15:57:08 -- json_config/json_config_extra_key.sh@79 -- # echo 'INFO: shutting down applications...' 00:06:37.736 INFO: shutting down applications... 00:06:37.736 15:57:08 -- json_config/json_config_extra_key.sh@80 -- # json_config_test_shutdown_app target 00:06:37.736 15:57:08 -- json_config/json_config_extra_key.sh@40 -- # local app=target 00:06:37.736 15:57:08 -- json_config/json_config_extra_key.sh@43 -- # [[ -n 22 ]] 00:06:37.736 15:57:08 -- json_config/json_config_extra_key.sh@44 -- # [[ -n 1184098 ]] 00:06:37.736 15:57:08 -- json_config/json_config_extra_key.sh@47 -- # kill -SIGINT 1184098 00:06:37.736 15:57:08 -- json_config/json_config_extra_key.sh@49 -- # (( i = 0 )) 00:06:37.736 15:57:08 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:37.736 15:57:08 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1184098 00:06:37.736 15:57:08 -- json_config/json_config_extra_key.sh@54 -- # sleep 0.5 00:06:37.996 15:57:08 -- json_config/json_config_extra_key.sh@49 -- # (( i++ )) 00:06:37.996 15:57:08 -- json_config/json_config_extra_key.sh@49 -- # (( i < 30 )) 00:06:37.996 15:57:08 -- json_config/json_config_extra_key.sh@50 -- # kill -0 1184098 00:06:37.996 15:57:08 -- json_config/json_config_extra_key.sh@51 -- # app_pid[$app]= 00:06:37.996 15:57:08 -- json_config/json_config_extra_key.sh@52 -- # break 00:06:37.996 15:57:08 -- json_config/json_config_extra_key.sh@57 -- # [[ -n '' ]] 00:06:37.996 15:57:08 -- json_config/json_config_extra_key.sh@62 -- # echo 'SPDK target shutdown done' 00:06:37.996 SPDK target shutdown done 00:06:37.996 15:57:08 -- json_config/json_config_extra_key.sh@82 -- # echo Success 00:06:37.996 Success 00:06:37.996 00:06:37.996 real 0m1.578s 00:06:37.996 user 0m1.299s 00:06:37.996 sys 0m0.448s 00:06:37.996 15:57:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:37.996 15:57:08 -- common/autotest_common.sh@10 -- # set +x 00:06:37.996 ************************************ 00:06:37.996 END TEST json_config_extra_key 00:06:37.996 ************************************ 00:06:38.256 15:57:08 -- spdk/autotest.sh@167 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:38.256 15:57:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:38.256 15:57:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:38.256 15:57:08 -- common/autotest_common.sh@10 -- # set +x 00:06:38.256 ************************************ 00:06:38.256 START TEST alias_rpc 00:06:38.256 ************************************ 00:06:38.256 15:57:08 -- 
common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:38.256 * Looking for test storage... 00:06:38.256 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/alias_rpc 00:06:38.256 15:57:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:38.256 15:57:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:38.256 15:57:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:38.256 15:57:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:38.256 15:57:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:38.256 15:57:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:38.256 15:57:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:38.256 15:57:08 -- scripts/common.sh@335 -- # IFS=.-: 00:06:38.256 15:57:08 -- scripts/common.sh@335 -- # read -ra ver1 00:06:38.256 15:57:08 -- scripts/common.sh@336 -- # IFS=.-: 00:06:38.256 15:57:08 -- scripts/common.sh@336 -- # read -ra ver2 00:06:38.256 15:57:08 -- scripts/common.sh@337 -- # local 'op=<' 00:06:38.256 15:57:08 -- scripts/common.sh@339 -- # ver1_l=2 00:06:38.256 15:57:08 -- scripts/common.sh@340 -- # ver2_l=1 00:06:38.256 15:57:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:38.256 15:57:08 -- scripts/common.sh@343 -- # case "$op" in 00:06:38.256 15:57:08 -- scripts/common.sh@344 -- # : 1 00:06:38.256 15:57:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:38.256 15:57:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:38.256 15:57:08 -- scripts/common.sh@364 -- # decimal 1 00:06:38.256 15:57:09 -- scripts/common.sh@352 -- # local d=1 00:06:38.256 15:57:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:38.256 15:57:09 -- scripts/common.sh@354 -- # echo 1 00:06:38.256 15:57:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:38.256 15:57:09 -- scripts/common.sh@365 -- # decimal 2 00:06:38.256 15:57:09 -- scripts/common.sh@352 -- # local d=2 00:06:38.256 15:57:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:38.256 15:57:09 -- scripts/common.sh@354 -- # echo 2 00:06:38.256 15:57:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:38.256 15:57:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:38.256 15:57:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:38.256 15:57:09 -- scripts/common.sh@367 -- # return 0 00:06:38.256 15:57:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:38.256 15:57:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:38.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.256 --rc genhtml_branch_coverage=1 00:06:38.256 --rc genhtml_function_coverage=1 00:06:38.256 --rc genhtml_legend=1 00:06:38.256 --rc geninfo_all_blocks=1 00:06:38.256 --rc geninfo_unexecuted_blocks=1 00:06:38.256 00:06:38.256 ' 00:06:38.256 15:57:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:38.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.256 --rc genhtml_branch_coverage=1 00:06:38.256 --rc genhtml_function_coverage=1 00:06:38.256 --rc genhtml_legend=1 00:06:38.256 --rc geninfo_all_blocks=1 00:06:38.256 --rc geninfo_unexecuted_blocks=1 00:06:38.256 00:06:38.256 ' 00:06:38.256 15:57:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:38.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.256 --rc genhtml_branch_coverage=1 00:06:38.256 --rc 
genhtml_function_coverage=1 00:06:38.256 --rc genhtml_legend=1 00:06:38.256 --rc geninfo_all_blocks=1 00:06:38.256 --rc geninfo_unexecuted_blocks=1 00:06:38.256 00:06:38.256 ' 00:06:38.256 15:57:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:38.256 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:38.256 --rc genhtml_branch_coverage=1 00:06:38.256 --rc genhtml_function_coverage=1 00:06:38.256 --rc genhtml_legend=1 00:06:38.256 --rc geninfo_all_blocks=1 00:06:38.256 --rc geninfo_unexecuted_blocks=1 00:06:38.256 00:06:38.256 ' 00:06:38.256 15:57:09 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:38.256 15:57:09 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1184798 00:06:38.257 15:57:09 -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:38.257 15:57:09 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1184798 00:06:38.257 15:57:09 -- common/autotest_common.sh@829 -- # '[' -z 1184798 ']' 00:06:38.257 15:57:09 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.257 15:57:09 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.257 15:57:09 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.257 15:57:09 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.257 15:57:09 -- common/autotest_common.sh@10 -- # set +x 00:06:38.520 [2024-11-20 15:57:09.066165] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:38.520 [2024-11-20 15:57:09.066222] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1184798 ] 00:06:38.520 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.520 [2024-11-20 15:57:09.149830] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.520 [2024-11-20 15:57:09.188118] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:38.520 [2024-11-20 15:57:09.188240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.093 15:57:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.093 15:57:09 -- common/autotest_common.sh@862 -- # return 0 00:06:39.093 15:57:09 -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:39.352 15:57:10 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1184798 00:06:39.352 15:57:10 -- common/autotest_common.sh@936 -- # '[' -z 1184798 ']' 00:06:39.352 15:57:10 -- common/autotest_common.sh@940 -- # kill -0 1184798 00:06:39.352 15:57:10 -- common/autotest_common.sh@941 -- # uname 00:06:39.352 15:57:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:39.352 15:57:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1184798 00:06:39.611 15:57:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:39.611 15:57:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:39.611 15:57:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1184798' 00:06:39.611 killing process with pid 1184798 00:06:39.611 15:57:10 -- common/autotest_common.sh@955 -- # kill 1184798 00:06:39.611 15:57:10 -- 
common/autotest_common.sh@960 -- # wait 1184798 00:06:39.869 00:06:39.869 real 0m1.636s 00:06:39.869 user 0m1.731s 00:06:39.869 sys 0m0.506s 00:06:39.869 15:57:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:39.869 15:57:10 -- common/autotest_common.sh@10 -- # set +x 00:06:39.869 ************************************ 00:06:39.869 END TEST alias_rpc 00:06:39.869 ************************************ 00:06:39.869 15:57:10 -- spdk/autotest.sh@169 -- # [[ 0 -eq 0 ]] 00:06:39.869 15:57:10 -- spdk/autotest.sh@170 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:39.869 15:57:10 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:39.869 15:57:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:39.869 15:57:10 -- common/autotest_common.sh@10 -- # set +x 00:06:39.869 ************************************ 00:06:39.869 START TEST spdkcli_tcp 00:06:39.869 ************************************ 00:06:39.869 15:57:10 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:39.869 * Looking for test storage... 00:06:39.869 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:06:39.869 15:57:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:39.869 15:57:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:39.869 15:57:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:40.129 15:57:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:40.129 15:57:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:40.129 15:57:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:40.129 15:57:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:40.129 15:57:10 -- scripts/common.sh@335 -- # IFS=.-: 00:06:40.129 15:57:10 -- scripts/common.sh@335 -- # read -ra ver1 00:06:40.129 15:57:10 -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.129 15:57:10 -- scripts/common.sh@336 -- # read -ra ver2 00:06:40.129 15:57:10 -- scripts/common.sh@337 -- # local 'op=<' 00:06:40.129 15:57:10 -- scripts/common.sh@339 -- # ver1_l=2 00:06:40.129 15:57:10 -- scripts/common.sh@340 -- # ver2_l=1 00:06:40.129 15:57:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:40.129 15:57:10 -- scripts/common.sh@343 -- # case "$op" in 00:06:40.129 15:57:10 -- scripts/common.sh@344 -- # : 1 00:06:40.129 15:57:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:40.129 15:57:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:40.129 15:57:10 -- scripts/common.sh@364 -- # decimal 1 00:06:40.129 15:57:10 -- scripts/common.sh@352 -- # local d=1 00:06:40.129 15:57:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.129 15:57:10 -- scripts/common.sh@354 -- # echo 1 00:06:40.129 15:57:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:40.129 15:57:10 -- scripts/common.sh@365 -- # decimal 2 00:06:40.129 15:57:10 -- scripts/common.sh@352 -- # local d=2 00:06:40.129 15:57:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.129 15:57:10 -- scripts/common.sh@354 -- # echo 2 00:06:40.129 15:57:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:40.129 15:57:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:40.129 15:57:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:40.129 15:57:10 -- scripts/common.sh@367 -- # return 0 00:06:40.129 15:57:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.129 15:57:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:40.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.129 --rc genhtml_branch_coverage=1 00:06:40.129 --rc genhtml_function_coverage=1 00:06:40.129 --rc genhtml_legend=1 00:06:40.129 --rc geninfo_all_blocks=1 00:06:40.129 --rc geninfo_unexecuted_blocks=1 00:06:40.129 00:06:40.129 ' 00:06:40.129 15:57:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:40.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.129 --rc genhtml_branch_coverage=1 00:06:40.129 --rc genhtml_function_coverage=1 00:06:40.129 --rc genhtml_legend=1 00:06:40.129 --rc geninfo_all_blocks=1 00:06:40.129 --rc geninfo_unexecuted_blocks=1 00:06:40.129 00:06:40.129 ' 00:06:40.129 15:57:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:40.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.129 --rc genhtml_branch_coverage=1 00:06:40.129 --rc genhtml_function_coverage=1 00:06:40.129 --rc genhtml_legend=1 00:06:40.129 --rc geninfo_all_blocks=1 00:06:40.129 --rc geninfo_unexecuted_blocks=1 00:06:40.129 00:06:40.129 ' 00:06:40.129 15:57:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:40.129 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.129 --rc genhtml_branch_coverage=1 00:06:40.129 --rc genhtml_function_coverage=1 00:06:40.129 --rc genhtml_legend=1 00:06:40.129 --rc geninfo_all_blocks=1 00:06:40.129 --rc geninfo_unexecuted_blocks=1 00:06:40.129 00:06:40.129 ' 00:06:40.129 15:57:10 -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:06:40.129 15:57:10 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:40.129 15:57:10 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:06:40.129 15:57:10 -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:40.129 15:57:10 -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:40.129 15:57:10 -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:40.129 15:57:10 -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:40.129 15:57:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:40.129 15:57:10 -- common/autotest_common.sh@10 -- # set +x 00:06:40.129 15:57:10 -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1185249 00:06:40.129 15:57:10 -- spdkcli/tcp.sh@27 -- # waitforlisten 1185249 00:06:40.129 15:57:10 -- 
spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:40.129 15:57:10 -- common/autotest_common.sh@829 -- # '[' -z 1185249 ']' 00:06:40.129 15:57:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.129 15:57:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:40.129 15:57:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.129 15:57:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:40.129 15:57:10 -- common/autotest_common.sh@10 -- # set +x 00:06:40.129 [2024-11-20 15:57:10.751368] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:40.129 [2024-11-20 15:57:10.751421] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185249 ] 00:06:40.129 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.129 [2024-11-20 15:57:10.836881] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:40.129 [2024-11-20 15:57:10.875385] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:40.129 [2024-11-20 15:57:10.875576] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.130 [2024-11-20 15:57:10.875577] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.066 15:57:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:41.066 15:57:11 -- common/autotest_common.sh@862 -- # return 0 00:06:41.066 15:57:11 -- spdkcli/tcp.sh@31 -- # socat_pid=1185269 00:06:41.066 15:57:11 -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:41.066 15:57:11 -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:41.066 [ 00:06:41.066 "bdev_malloc_delete", 00:06:41.066 "bdev_malloc_create", 00:06:41.066 "bdev_null_resize", 00:06:41.066 "bdev_null_delete", 00:06:41.066 "bdev_null_create", 00:06:41.066 "bdev_nvme_cuse_unregister", 00:06:41.066 "bdev_nvme_cuse_register", 00:06:41.066 "bdev_opal_new_user", 00:06:41.066 "bdev_opal_set_lock_state", 00:06:41.066 "bdev_opal_delete", 00:06:41.066 "bdev_opal_get_info", 00:06:41.066 "bdev_opal_create", 00:06:41.066 "bdev_nvme_opal_revert", 00:06:41.066 "bdev_nvme_opal_init", 00:06:41.066 "bdev_nvme_send_cmd", 00:06:41.066 "bdev_nvme_get_path_iostat", 00:06:41.066 "bdev_nvme_get_mdns_discovery_info", 00:06:41.066 "bdev_nvme_stop_mdns_discovery", 00:06:41.066 "bdev_nvme_start_mdns_discovery", 00:06:41.066 "bdev_nvme_set_multipath_policy", 00:06:41.066 "bdev_nvme_set_preferred_path", 00:06:41.066 "bdev_nvme_get_io_paths", 00:06:41.066 "bdev_nvme_remove_error_injection", 00:06:41.066 "bdev_nvme_add_error_injection", 00:06:41.066 "bdev_nvme_get_discovery_info", 00:06:41.066 "bdev_nvme_stop_discovery", 00:06:41.066 "bdev_nvme_start_discovery", 00:06:41.066 "bdev_nvme_get_controller_health_info", 00:06:41.066 "bdev_nvme_disable_controller", 00:06:41.066 "bdev_nvme_enable_controller", 00:06:41.066 "bdev_nvme_reset_controller", 00:06:41.066 "bdev_nvme_get_transport_statistics", 00:06:41.066 "bdev_nvme_apply_firmware", 00:06:41.066 "bdev_nvme_detach_controller", 
00:06:41.066 "bdev_nvme_get_controllers", 00:06:41.066 "bdev_nvme_attach_controller", 00:06:41.066 "bdev_nvme_set_hotplug", 00:06:41.066 "bdev_nvme_set_options", 00:06:41.066 "bdev_passthru_delete", 00:06:41.066 "bdev_passthru_create", 00:06:41.066 "bdev_lvol_grow_lvstore", 00:06:41.066 "bdev_lvol_get_lvols", 00:06:41.066 "bdev_lvol_get_lvstores", 00:06:41.066 "bdev_lvol_delete", 00:06:41.066 "bdev_lvol_set_read_only", 00:06:41.066 "bdev_lvol_resize", 00:06:41.066 "bdev_lvol_decouple_parent", 00:06:41.066 "bdev_lvol_inflate", 00:06:41.066 "bdev_lvol_rename", 00:06:41.066 "bdev_lvol_clone_bdev", 00:06:41.066 "bdev_lvol_clone", 00:06:41.066 "bdev_lvol_snapshot", 00:06:41.066 "bdev_lvol_create", 00:06:41.066 "bdev_lvol_delete_lvstore", 00:06:41.066 "bdev_lvol_rename_lvstore", 00:06:41.066 "bdev_lvol_create_lvstore", 00:06:41.066 "bdev_raid_set_options", 00:06:41.066 "bdev_raid_remove_base_bdev", 00:06:41.066 "bdev_raid_add_base_bdev", 00:06:41.066 "bdev_raid_delete", 00:06:41.066 "bdev_raid_create", 00:06:41.066 "bdev_raid_get_bdevs", 00:06:41.066 "bdev_error_inject_error", 00:06:41.066 "bdev_error_delete", 00:06:41.066 "bdev_error_create", 00:06:41.066 "bdev_split_delete", 00:06:41.066 "bdev_split_create", 00:06:41.066 "bdev_delay_delete", 00:06:41.066 "bdev_delay_create", 00:06:41.066 "bdev_delay_update_latency", 00:06:41.066 "bdev_zone_block_delete", 00:06:41.066 "bdev_zone_block_create", 00:06:41.066 "blobfs_create", 00:06:41.066 "blobfs_detect", 00:06:41.066 "blobfs_set_cache_size", 00:06:41.066 "bdev_aio_delete", 00:06:41.066 "bdev_aio_rescan", 00:06:41.066 "bdev_aio_create", 00:06:41.066 "bdev_ftl_set_property", 00:06:41.066 "bdev_ftl_get_properties", 00:06:41.066 "bdev_ftl_get_stats", 00:06:41.066 "bdev_ftl_unmap", 00:06:41.066 "bdev_ftl_unload", 00:06:41.066 "bdev_ftl_delete", 00:06:41.066 "bdev_ftl_load", 00:06:41.066 "bdev_ftl_create", 00:06:41.066 "bdev_virtio_attach_controller", 00:06:41.066 "bdev_virtio_scsi_get_devices", 00:06:41.066 "bdev_virtio_detach_controller", 00:06:41.066 "bdev_virtio_blk_set_hotplug", 00:06:41.066 "bdev_iscsi_delete", 00:06:41.066 "bdev_iscsi_create", 00:06:41.066 "bdev_iscsi_set_options", 00:06:41.066 "accel_error_inject_error", 00:06:41.066 "ioat_scan_accel_module", 00:06:41.066 "dsa_scan_accel_module", 00:06:41.066 "iaa_scan_accel_module", 00:06:41.066 "iscsi_set_options", 00:06:41.066 "iscsi_get_auth_groups", 00:06:41.066 "iscsi_auth_group_remove_secret", 00:06:41.066 "iscsi_auth_group_add_secret", 00:06:41.066 "iscsi_delete_auth_group", 00:06:41.067 "iscsi_create_auth_group", 00:06:41.067 "iscsi_set_discovery_auth", 00:06:41.067 "iscsi_get_options", 00:06:41.067 "iscsi_target_node_request_logout", 00:06:41.067 "iscsi_target_node_set_redirect", 00:06:41.067 "iscsi_target_node_set_auth", 00:06:41.067 "iscsi_target_node_add_lun", 00:06:41.067 "iscsi_get_connections", 00:06:41.067 "iscsi_portal_group_set_auth", 00:06:41.067 "iscsi_start_portal_group", 00:06:41.067 "iscsi_delete_portal_group", 00:06:41.067 "iscsi_create_portal_group", 00:06:41.067 "iscsi_get_portal_groups", 00:06:41.067 "iscsi_delete_target_node", 00:06:41.067 "iscsi_target_node_remove_pg_ig_maps", 00:06:41.067 "iscsi_target_node_add_pg_ig_maps", 00:06:41.067 "iscsi_create_target_node", 00:06:41.067 "iscsi_get_target_nodes", 00:06:41.067 "iscsi_delete_initiator_group", 00:06:41.067 "iscsi_initiator_group_remove_initiators", 00:06:41.067 "iscsi_initiator_group_add_initiators", 00:06:41.067 "iscsi_create_initiator_group", 00:06:41.067 "iscsi_get_initiator_groups", 00:06:41.067 
"nvmf_set_crdt", 00:06:41.067 "nvmf_set_config", 00:06:41.067 "nvmf_set_max_subsystems", 00:06:41.067 "nvmf_subsystem_get_listeners", 00:06:41.067 "nvmf_subsystem_get_qpairs", 00:06:41.067 "nvmf_subsystem_get_controllers", 00:06:41.067 "nvmf_get_stats", 00:06:41.067 "nvmf_get_transports", 00:06:41.067 "nvmf_create_transport", 00:06:41.067 "nvmf_get_targets", 00:06:41.067 "nvmf_delete_target", 00:06:41.067 "nvmf_create_target", 00:06:41.067 "nvmf_subsystem_allow_any_host", 00:06:41.067 "nvmf_subsystem_remove_host", 00:06:41.067 "nvmf_subsystem_add_host", 00:06:41.067 "nvmf_subsystem_remove_ns", 00:06:41.067 "nvmf_subsystem_add_ns", 00:06:41.067 "nvmf_subsystem_listener_set_ana_state", 00:06:41.067 "nvmf_discovery_get_referrals", 00:06:41.067 "nvmf_discovery_remove_referral", 00:06:41.067 "nvmf_discovery_add_referral", 00:06:41.067 "nvmf_subsystem_remove_listener", 00:06:41.067 "nvmf_subsystem_add_listener", 00:06:41.067 "nvmf_delete_subsystem", 00:06:41.067 "nvmf_create_subsystem", 00:06:41.067 "nvmf_get_subsystems", 00:06:41.067 "env_dpdk_get_mem_stats", 00:06:41.067 "nbd_get_disks", 00:06:41.067 "nbd_stop_disk", 00:06:41.067 "nbd_start_disk", 00:06:41.067 "ublk_recover_disk", 00:06:41.067 "ublk_get_disks", 00:06:41.067 "ublk_stop_disk", 00:06:41.067 "ublk_start_disk", 00:06:41.067 "ublk_destroy_target", 00:06:41.067 "ublk_create_target", 00:06:41.067 "virtio_blk_create_transport", 00:06:41.067 "virtio_blk_get_transports", 00:06:41.067 "vhost_controller_set_coalescing", 00:06:41.067 "vhost_get_controllers", 00:06:41.067 "vhost_delete_controller", 00:06:41.067 "vhost_create_blk_controller", 00:06:41.067 "vhost_scsi_controller_remove_target", 00:06:41.067 "vhost_scsi_controller_add_target", 00:06:41.067 "vhost_start_scsi_controller", 00:06:41.067 "vhost_create_scsi_controller", 00:06:41.067 "thread_set_cpumask", 00:06:41.067 "framework_get_scheduler", 00:06:41.067 "framework_set_scheduler", 00:06:41.067 "framework_get_reactors", 00:06:41.067 "thread_get_io_channels", 00:06:41.067 "thread_get_pollers", 00:06:41.067 "thread_get_stats", 00:06:41.067 "framework_monitor_context_switch", 00:06:41.067 "spdk_kill_instance", 00:06:41.067 "log_enable_timestamps", 00:06:41.067 "log_get_flags", 00:06:41.067 "log_clear_flag", 00:06:41.067 "log_set_flag", 00:06:41.067 "log_get_level", 00:06:41.067 "log_set_level", 00:06:41.067 "log_get_print_level", 00:06:41.067 "log_set_print_level", 00:06:41.067 "framework_enable_cpumask_locks", 00:06:41.067 "framework_disable_cpumask_locks", 00:06:41.067 "framework_wait_init", 00:06:41.067 "framework_start_init", 00:06:41.067 "scsi_get_devices", 00:06:41.067 "bdev_get_histogram", 00:06:41.067 "bdev_enable_histogram", 00:06:41.067 "bdev_set_qos_limit", 00:06:41.067 "bdev_set_qd_sampling_period", 00:06:41.067 "bdev_get_bdevs", 00:06:41.067 "bdev_reset_iostat", 00:06:41.067 "bdev_get_iostat", 00:06:41.067 "bdev_examine", 00:06:41.067 "bdev_wait_for_examine", 00:06:41.067 "bdev_set_options", 00:06:41.067 "notify_get_notifications", 00:06:41.067 "notify_get_types", 00:06:41.067 "accel_get_stats", 00:06:41.067 "accel_set_options", 00:06:41.067 "accel_set_driver", 00:06:41.067 "accel_crypto_key_destroy", 00:06:41.067 "accel_crypto_keys_get", 00:06:41.067 "accel_crypto_key_create", 00:06:41.067 "accel_assign_opc", 00:06:41.067 "accel_get_module_info", 00:06:41.067 "accel_get_opc_assignments", 00:06:41.067 "vmd_rescan", 00:06:41.067 "vmd_remove_device", 00:06:41.067 "vmd_enable", 00:06:41.067 "sock_set_default_impl", 00:06:41.067 "sock_impl_set_options", 00:06:41.067 
"sock_impl_get_options", 00:06:41.067 "iobuf_get_stats", 00:06:41.067 "iobuf_set_options", 00:06:41.067 "framework_get_pci_devices", 00:06:41.067 "framework_get_config", 00:06:41.067 "framework_get_subsystems", 00:06:41.067 "trace_get_info", 00:06:41.067 "trace_get_tpoint_group_mask", 00:06:41.067 "trace_disable_tpoint_group", 00:06:41.067 "trace_enable_tpoint_group", 00:06:41.067 "trace_clear_tpoint_mask", 00:06:41.067 "trace_set_tpoint_mask", 00:06:41.067 "spdk_get_version", 00:06:41.067 "rpc_get_methods" 00:06:41.067 ] 00:06:41.067 15:57:11 -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:41.067 15:57:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:41.067 15:57:11 -- common/autotest_common.sh@10 -- # set +x 00:06:41.067 15:57:11 -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:41.067 15:57:11 -- spdkcli/tcp.sh@38 -- # killprocess 1185249 00:06:41.067 15:57:11 -- common/autotest_common.sh@936 -- # '[' -z 1185249 ']' 00:06:41.067 15:57:11 -- common/autotest_common.sh@940 -- # kill -0 1185249 00:06:41.067 15:57:11 -- common/autotest_common.sh@941 -- # uname 00:06:41.067 15:57:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:41.067 15:57:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1185249 00:06:41.067 15:57:11 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:41.067 15:57:11 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:41.067 15:57:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1185249' 00:06:41.067 killing process with pid 1185249 00:06:41.067 15:57:11 -- common/autotest_common.sh@955 -- # kill 1185249 00:06:41.067 15:57:11 -- common/autotest_common.sh@960 -- # wait 1185249 00:06:41.636 00:06:41.636 real 0m1.625s 00:06:41.636 user 0m2.909s 00:06:41.636 sys 0m0.536s 00:06:41.636 15:57:12 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:41.636 15:57:12 -- common/autotest_common.sh@10 -- # set +x 00:06:41.636 ************************************ 00:06:41.636 END TEST spdkcli_tcp 00:06:41.637 ************************************ 00:06:41.637 15:57:12 -- spdk/autotest.sh@173 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:41.637 15:57:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:41.637 15:57:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:41.637 15:57:12 -- common/autotest_common.sh@10 -- # set +x 00:06:41.637 ************************************ 00:06:41.637 START TEST dpdk_mem_utility 00:06:41.637 ************************************ 00:06:41.637 15:57:12 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:41.637 * Looking for test storage... 
00:06:41.637 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dpdk_memory_utility 00:06:41.637 15:57:12 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:41.637 15:57:12 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:41.637 15:57:12 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:41.637 15:57:12 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:41.637 15:57:12 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:41.637 15:57:12 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:41.637 15:57:12 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:41.637 15:57:12 -- scripts/common.sh@335 -- # IFS=.-: 00:06:41.637 15:57:12 -- scripts/common.sh@335 -- # read -ra ver1 00:06:41.637 15:57:12 -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.637 15:57:12 -- scripts/common.sh@336 -- # read -ra ver2 00:06:41.637 15:57:12 -- scripts/common.sh@337 -- # local 'op=<' 00:06:41.637 15:57:12 -- scripts/common.sh@339 -- # ver1_l=2 00:06:41.637 15:57:12 -- scripts/common.sh@340 -- # ver2_l=1 00:06:41.637 15:57:12 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:41.637 15:57:12 -- scripts/common.sh@343 -- # case "$op" in 00:06:41.637 15:57:12 -- scripts/common.sh@344 -- # : 1 00:06:41.637 15:57:12 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:41.637 15:57:12 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:41.637 15:57:12 -- scripts/common.sh@364 -- # decimal 1 00:06:41.637 15:57:12 -- scripts/common.sh@352 -- # local d=1 00:06:41.637 15:57:12 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.637 15:57:12 -- scripts/common.sh@354 -- # echo 1 00:06:41.637 15:57:12 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:41.637 15:57:12 -- scripts/common.sh@365 -- # decimal 2 00:06:41.637 15:57:12 -- scripts/common.sh@352 -- # local d=2 00:06:41.637 15:57:12 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.637 15:57:12 -- scripts/common.sh@354 -- # echo 2 00:06:41.637 15:57:12 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:41.637 15:57:12 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:41.637 15:57:12 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:41.637 15:57:12 -- scripts/common.sh@367 -- # return 0 00:06:41.637 15:57:12 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.637 15:57:12 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:41.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.637 --rc genhtml_branch_coverage=1 00:06:41.637 --rc genhtml_function_coverage=1 00:06:41.637 --rc genhtml_legend=1 00:06:41.637 --rc geninfo_all_blocks=1 00:06:41.637 --rc geninfo_unexecuted_blocks=1 00:06:41.637 00:06:41.637 ' 00:06:41.637 15:57:12 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:41.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.637 --rc genhtml_branch_coverage=1 00:06:41.637 --rc genhtml_function_coverage=1 00:06:41.637 --rc genhtml_legend=1 00:06:41.637 --rc geninfo_all_blocks=1 00:06:41.637 --rc geninfo_unexecuted_blocks=1 00:06:41.637 00:06:41.637 ' 00:06:41.637 15:57:12 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:41.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.637 --rc genhtml_branch_coverage=1 00:06:41.637 --rc genhtml_function_coverage=1 00:06:41.637 --rc genhtml_legend=1 00:06:41.637 --rc geninfo_all_blocks=1 00:06:41.637 --rc geninfo_unexecuted_blocks=1 00:06:41.637 
00:06:41.637 ' 00:06:41.637 15:57:12 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:41.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.637 --rc genhtml_branch_coverage=1 00:06:41.637 --rc genhtml_function_coverage=1 00:06:41.637 --rc genhtml_legend=1 00:06:41.637 --rc geninfo_all_blocks=1 00:06:41.637 --rc geninfo_unexecuted_blocks=1 00:06:41.637 00:06:41.637 ' 00:06:41.637 15:57:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:41.637 15:57:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1185598 00:06:41.637 15:57:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt 00:06:41.637 15:57:12 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1185598 00:06:41.637 15:57:12 -- common/autotest_common.sh@829 -- # '[' -z 1185598 ']' 00:06:41.637 15:57:12 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.637 15:57:12 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.637 15:57:12 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.637 15:57:12 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.637 15:57:12 -- common/autotest_common.sh@10 -- # set +x 00:06:41.637 [2024-11-20 15:57:12.408800] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:41.637 [2024-11-20 15:57:12.408854] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185598 ] 00:06:41.637 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.897 [2024-11-20 15:57:12.492144] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.897 [2024-11-20 15:57:12.529197] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:06:41.897 [2024-11-20 15:57:12.529322] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.526 15:57:13 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.526 15:57:13 -- common/autotest_common.sh@862 -- # return 0 00:06:42.526 15:57:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:42.526 15:57:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:42.526 15:57:13 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.526 15:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:42.526 { 00:06:42.526 "filename": "/tmp/spdk_mem_dump.txt" 00:06:42.526 } 00:06:42.526 15:57:13 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.526 15:57:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:42.526 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:42.526 1 heaps totaling size 814.000000 MiB 00:06:42.526 size: 814.000000 MiB heap id: 0 00:06:42.526 end heaps---------- 00:06:42.526 8 mempools totaling size 598.116089 MiB 00:06:42.526 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:42.526 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:42.526 size: 84.521057 MiB name: 
bdev_io_1185598 00:06:42.526 size: 51.011292 MiB name: evtpool_1185598 00:06:42.526 size: 50.003479 MiB name: msgpool_1185598 00:06:42.526 size: 21.763794 MiB name: PDU_Pool 00:06:42.526 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:42.526 size: 0.026123 MiB name: Session_Pool 00:06:42.526 end mempools------- 00:06:42.526 6 memzones totaling size 4.142822 MiB 00:06:42.526 size: 1.000366 MiB name: RG_ring_0_1185598 00:06:42.526 size: 1.000366 MiB name: RG_ring_1_1185598 00:06:42.526 size: 1.000366 MiB name: RG_ring_4_1185598 00:06:42.526 size: 1.000366 MiB name: RG_ring_5_1185598 00:06:42.526 size: 0.125366 MiB name: RG_ring_2_1185598 00:06:42.526 size: 0.015991 MiB name: RG_ring_3_1185598 00:06:42.526 end memzones------- 00:06:42.526 15:57:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:42.786 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:42.786 list of free elements. size: 12.519348 MiB 00:06:42.786 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:42.786 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:42.786 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:42.786 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:42.786 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:42.786 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:42.786 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:42.786 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:42.786 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:42.786 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:42.786 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:42.786 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:42.786 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:42.786 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:42.786 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:42.786 list of standard malloc elements. 
size: 199.218079 MiB 00:06:42.786 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:42.786 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:42.786 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:42.786 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:42.786 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:42.786 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:42.786 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:42.786 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:42.786 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:42.786 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:42.786 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:42.786 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:42.786 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:42.786 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:42.787 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:42.787 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:42.787 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:42.787 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:42.787 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:42.787 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:42.787 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:42.787 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:42.787 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:42.787 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:42.787 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:42.787 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:42.787 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:42.787 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:42.787 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:42.787 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:42.787 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:42.787 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:42.787 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:42.787 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:42.787 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:42.787 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:42.787 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:42.787 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:42.787 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:42.787 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:42.787 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:42.787 list of memzone associated elements. 
size: 602.262573 MiB 00:06:42.787 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:42.787 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:42.787 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:42.787 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:42.787 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:42.787 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1185598_0 00:06:42.787 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:42.787 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1185598_0 00:06:42.787 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:42.787 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1185598_0 00:06:42.787 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:42.787 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:42.787 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:42.787 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:42.787 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:42.787 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1185598 00:06:42.787 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:42.787 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1185598 00:06:42.787 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:42.787 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1185598 00:06:42.787 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:42.787 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:42.787 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:42.787 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:42.787 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:42.787 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:42.787 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:42.787 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:42.787 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:42.787 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1185598 00:06:42.787 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:42.787 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1185598 00:06:42.787 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:42.787 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1185598 00:06:42.787 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:42.787 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1185598 00:06:42.787 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:42.787 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1185598 00:06:42.787 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:42.787 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:42.787 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:42.787 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:42.787 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:42.787 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:42.787 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:42.787 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1185598 00:06:42.787 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:42.787 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:42.787 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:42.787 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:42.787 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:42.787 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1185598 00:06:42.787 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:42.787 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:42.787 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:42.787 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1185598 00:06:42.787 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:42.787 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1185598 00:06:42.787 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:42.787 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:42.787 15:57:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:42.787 15:57:13 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1185598 00:06:42.787 15:57:13 -- common/autotest_common.sh@936 -- # '[' -z 1185598 ']' 00:06:42.787 15:57:13 -- common/autotest_common.sh@940 -- # kill -0 1185598 00:06:42.787 15:57:13 -- common/autotest_common.sh@941 -- # uname 00:06:42.787 15:57:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:42.787 15:57:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1185598 00:06:42.787 15:57:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:06:42.787 15:57:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:06:42.787 15:57:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1185598' 00:06:42.787 killing process with pid 1185598 00:06:42.787 15:57:13 -- common/autotest_common.sh@955 -- # kill 1185598 00:06:42.787 15:57:13 -- common/autotest_common.sh@960 -- # wait 1185598 00:06:43.047 00:06:43.047 real 0m1.531s 00:06:43.047 user 0m1.552s 00:06:43.047 sys 0m0.506s 00:06:43.047 15:57:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:43.047 15:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:43.047 ************************************ 00:06:43.047 END TEST dpdk_mem_utility 00:06:43.047 ************************************ 00:06:43.047 15:57:13 -- spdk/autotest.sh@174 -- # run_test event /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:43.047 15:57:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:43.047 15:57:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.047 15:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:43.047 ************************************ 00:06:43.047 START TEST event 00:06:43.047 ************************************ 00:06:43.047 15:57:13 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event.sh 00:06:43.047 * Looking for test storage... 
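For reference, the heap/mempool/memzone dump above comes from the two dpdk_mem_info.py invocations in this test; a minimal sketch of running them by hand against the live spdk_tgt (paths as in this workspace):
# Ask the target to write its DPDK memory dump (it reports /tmp/spdk_mem_dump.txt)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
# Summarize heaps, mempools and memzones from that dump
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py
# Print the element-level detail for heap 0 (the '-m 0' form the test uses)
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0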
00:06:43.307 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:06:43.307 15:57:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:43.307 15:57:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:43.307 15:57:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:43.307 15:57:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:43.307 15:57:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:43.307 15:57:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:43.307 15:57:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:43.307 15:57:13 -- scripts/common.sh@335 -- # IFS=.-: 00:06:43.307 15:57:13 -- scripts/common.sh@335 -- # read -ra ver1 00:06:43.307 15:57:13 -- scripts/common.sh@336 -- # IFS=.-: 00:06:43.307 15:57:13 -- scripts/common.sh@336 -- # read -ra ver2 00:06:43.307 15:57:13 -- scripts/common.sh@337 -- # local 'op=<' 00:06:43.307 15:57:13 -- scripts/common.sh@339 -- # ver1_l=2 00:06:43.307 15:57:13 -- scripts/common.sh@340 -- # ver2_l=1 00:06:43.307 15:57:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:43.307 15:57:13 -- scripts/common.sh@343 -- # case "$op" in 00:06:43.307 15:57:13 -- scripts/common.sh@344 -- # : 1 00:06:43.307 15:57:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:43.307 15:57:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:43.307 15:57:13 -- scripts/common.sh@364 -- # decimal 1 00:06:43.307 15:57:13 -- scripts/common.sh@352 -- # local d=1 00:06:43.307 15:57:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:43.307 15:57:13 -- scripts/common.sh@354 -- # echo 1 00:06:43.307 15:57:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:43.307 15:57:13 -- scripts/common.sh@365 -- # decimal 2 00:06:43.307 15:57:13 -- scripts/common.sh@352 -- # local d=2 00:06:43.307 15:57:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:43.307 15:57:13 -- scripts/common.sh@354 -- # echo 2 00:06:43.307 15:57:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:43.307 15:57:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:43.307 15:57:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:43.307 15:57:13 -- scripts/common.sh@367 -- # return 0 00:06:43.307 15:57:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:43.307 15:57:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:43.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.307 --rc genhtml_branch_coverage=1 00:06:43.307 --rc genhtml_function_coverage=1 00:06:43.307 --rc genhtml_legend=1 00:06:43.307 --rc geninfo_all_blocks=1 00:06:43.307 --rc geninfo_unexecuted_blocks=1 00:06:43.307 00:06:43.307 ' 00:06:43.307 15:57:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:43.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.307 --rc genhtml_branch_coverage=1 00:06:43.307 --rc genhtml_function_coverage=1 00:06:43.307 --rc genhtml_legend=1 00:06:43.307 --rc geninfo_all_blocks=1 00:06:43.307 --rc geninfo_unexecuted_blocks=1 00:06:43.307 00:06:43.307 ' 00:06:43.307 15:57:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:43.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.307 --rc genhtml_branch_coverage=1 00:06:43.307 --rc genhtml_function_coverage=1 00:06:43.307 --rc genhtml_legend=1 00:06:43.307 --rc geninfo_all_blocks=1 00:06:43.307 --rc geninfo_unexecuted_blocks=1 00:06:43.307 00:06:43.307 ' 
00:06:43.307 15:57:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:43.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:43.307 --rc genhtml_branch_coverage=1 00:06:43.307 --rc genhtml_function_coverage=1 00:06:43.307 --rc genhtml_legend=1 00:06:43.307 --rc geninfo_all_blocks=1 00:06:43.307 --rc geninfo_unexecuted_blocks=1 00:06:43.307 00:06:43.307 ' 00:06:43.307 15:57:13 -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:43.307 15:57:13 -- bdev/nbd_common.sh@6 -- # set -e 00:06:43.307 15:57:13 -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:43.307 15:57:13 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:06:43.307 15:57:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:43.307 15:57:13 -- common/autotest_common.sh@10 -- # set +x 00:06:43.307 ************************************ 00:06:43.307 START TEST event_perf 00:06:43.307 ************************************ 00:06:43.307 15:57:13 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:43.307 Running I/O for 1 seconds...[2024-11-20 15:57:13.971672] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:43.307 [2024-11-20 15:57:13.971754] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1185929 ] 00:06:43.307 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.307 [2024-11-20 15:57:14.058827] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:43.307 [2024-11-20 15:57:14.097021] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.307 [2024-11-20 15:57:14.097135] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:43.307 [2024-11-20 15:57:14.097245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.307 [2024-11-20 15:57:14.097246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:44.684 Running I/O for 1 seconds... 00:06:44.684 lcore 0: 212892 00:06:44.684 lcore 1: 212891 00:06:44.684 lcore 2: 212892 00:06:44.684 lcore 3: 212892 00:06:44.684 done. 
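For reference, the per-lcore counts above come from the event_perf benchmark binary; a minimal sketch of re-running it by hand (paths assume this workspace layout) is:
# Run the SPDK event framework benchmark for one second on cores 0-3:
# -m 0xF is the reactor core mask, -t 1 the run time in seconds; each
# reactor prints how many events it processed, as in the "lcore N:" lines above.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1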
00:06:44.684 00:06:44.684 real 0m1.206s 00:06:44.684 user 0m4.095s 00:06:44.684 sys 0m0.109s 00:06:44.684 15:57:15 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:44.684 15:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:44.684 ************************************ 00:06:44.684 END TEST event_perf 00:06:44.684 ************************************ 00:06:44.684 15:57:15 -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:44.684 15:57:15 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:44.684 15:57:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:44.684 15:57:15 -- common/autotest_common.sh@10 -- # set +x 00:06:44.684 ************************************ 00:06:44.684 START TEST event_reactor 00:06:44.684 ************************************ 00:06:44.684 15:57:15 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:44.684 [2024-11-20 15:57:15.228016] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:44.684 [2024-11-20 15:57:15.228106] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186179 ] 00:06:44.684 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.684 [2024-11-20 15:57:15.316933] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.684 [2024-11-20 15:57:15.352954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.620 test_start 00:06:45.621 oneshot 00:06:45.621 tick 100 00:06:45.621 tick 100 00:06:45.621 tick 250 00:06:45.621 tick 100 00:06:45.621 tick 100 00:06:45.621 tick 250 00:06:45.621 tick 500 00:06:45.621 tick 100 00:06:45.621 tick 100 00:06:45.621 tick 100 00:06:45.621 tick 250 00:06:45.621 tick 100 00:06:45.621 tick 100 00:06:45.621 test_end 00:06:45.621 00:06:45.621 real 0m1.202s 00:06:45.621 user 0m1.101s 00:06:45.621 sys 0m0.097s 00:06:45.621 15:57:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:45.621 15:57:16 -- common/autotest_common.sh@10 -- # set +x 00:06:45.621 ************************************ 00:06:45.621 END TEST event_reactor 00:06:45.621 ************************************ 00:06:45.880 15:57:16 -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:45.880 15:57:16 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:06:45.880 15:57:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:45.880 15:57:16 -- common/autotest_common.sh@10 -- # set +x 00:06:45.880 ************************************ 00:06:45.880 START TEST event_reactor_perf 00:06:45.880 ************************************ 00:06:45.880 15:57:16 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:45.880 [2024-11-20 15:57:16.481706] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:06:45.880 [2024-11-20 15:57:16.481793] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186325 ] 00:06:45.880 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.880 [2024-11-20 15:57:16.569937] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.880 [2024-11-20 15:57:16.606249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.256 test_start 00:06:47.256 test_end 00:06:47.256 Performance: 524095 events per second 00:06:47.256 00:06:47.256 real 0m1.202s 00:06:47.256 user 0m1.096s 00:06:47.256 sys 0m0.102s 00:06:47.256 15:57:17 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:47.256 15:57:17 -- common/autotest_common.sh@10 -- # set +x 00:06:47.257 ************************************ 00:06:47.257 END TEST event_reactor_perf 00:06:47.257 ************************************ 00:06:47.257 15:57:17 -- event/event.sh@49 -- # uname -s 00:06:47.257 15:57:17 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:47.257 15:57:17 -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:47.257 15:57:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:47.257 15:57:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:47.257 15:57:17 -- common/autotest_common.sh@10 -- # set +x 00:06:47.257 ************************************ 00:06:47.257 START TEST event_scheduler 00:06:47.257 ************************************ 00:06:47.257 15:57:17 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:47.257 * Looking for test storage... 00:06:47.257 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler 00:06:47.257 15:57:17 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:06:47.257 15:57:17 -- common/autotest_common.sh@1690 -- # lcov --version 00:06:47.257 15:57:17 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:06:47.257 15:57:17 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:06:47.257 15:57:17 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:06:47.257 15:57:17 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:06:47.257 15:57:17 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:06:47.257 15:57:17 -- scripts/common.sh@335 -- # IFS=.-: 00:06:47.257 15:57:17 -- scripts/common.sh@335 -- # read -ra ver1 00:06:47.257 15:57:17 -- scripts/common.sh@336 -- # IFS=.-: 00:06:47.257 15:57:17 -- scripts/common.sh@336 -- # read -ra ver2 00:06:47.257 15:57:17 -- scripts/common.sh@337 -- # local 'op=<' 00:06:47.257 15:57:17 -- scripts/common.sh@339 -- # ver1_l=2 00:06:47.257 15:57:17 -- scripts/common.sh@340 -- # ver2_l=1 00:06:47.257 15:57:17 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:06:47.257 15:57:17 -- scripts/common.sh@343 -- # case "$op" in 00:06:47.257 15:57:17 -- scripts/common.sh@344 -- # : 1 00:06:47.257 15:57:17 -- scripts/common.sh@363 -- # (( v = 0 )) 00:06:47.257 15:57:17 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:47.257 15:57:17 -- scripts/common.sh@364 -- # decimal 1 00:06:47.257 15:57:17 -- scripts/common.sh@352 -- # local d=1 00:06:47.257 15:57:17 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:47.257 15:57:17 -- scripts/common.sh@354 -- # echo 1 00:06:47.257 15:57:17 -- scripts/common.sh@364 -- # ver1[v]=1 00:06:47.257 15:57:17 -- scripts/common.sh@365 -- # decimal 2 00:06:47.257 15:57:17 -- scripts/common.sh@352 -- # local d=2 00:06:47.257 15:57:17 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:47.257 15:57:17 -- scripts/common.sh@354 -- # echo 2 00:06:47.257 15:57:17 -- scripts/common.sh@365 -- # ver2[v]=2 00:06:47.257 15:57:17 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:06:47.257 15:57:17 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:06:47.257 15:57:17 -- scripts/common.sh@367 -- # return 0 00:06:47.257 15:57:17 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:47.257 15:57:17 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:06:47.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.257 --rc genhtml_branch_coverage=1 00:06:47.257 --rc genhtml_function_coverage=1 00:06:47.257 --rc genhtml_legend=1 00:06:47.257 --rc geninfo_all_blocks=1 00:06:47.257 --rc geninfo_unexecuted_blocks=1 00:06:47.257 00:06:47.257 ' 00:06:47.257 15:57:17 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:06:47.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.257 --rc genhtml_branch_coverage=1 00:06:47.257 --rc genhtml_function_coverage=1 00:06:47.257 --rc genhtml_legend=1 00:06:47.257 --rc geninfo_all_blocks=1 00:06:47.257 --rc geninfo_unexecuted_blocks=1 00:06:47.257 00:06:47.257 ' 00:06:47.257 15:57:17 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:06:47.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.257 --rc genhtml_branch_coverage=1 00:06:47.257 --rc genhtml_function_coverage=1 00:06:47.257 --rc genhtml_legend=1 00:06:47.257 --rc geninfo_all_blocks=1 00:06:47.257 --rc geninfo_unexecuted_blocks=1 00:06:47.257 00:06:47.257 ' 00:06:47.257 15:57:17 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:06:47.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:47.257 --rc genhtml_branch_coverage=1 00:06:47.257 --rc genhtml_function_coverage=1 00:06:47.257 --rc genhtml_legend=1 00:06:47.257 --rc geninfo_all_blocks=1 00:06:47.257 --rc geninfo_unexecuted_blocks=1 00:06:47.257 00:06:47.257 ' 00:06:47.257 15:57:17 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:47.257 15:57:17 -- scheduler/scheduler.sh@35 -- # scheduler_pid=1186593 00:06:47.257 15:57:17 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:47.257 15:57:17 -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:47.257 15:57:17 -- scheduler/scheduler.sh@37 -- # waitforlisten 1186593 00:06:47.257 15:57:17 -- common/autotest_common.sh@829 -- # '[' -z 1186593 ']' 00:06:47.257 15:57:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.257 15:57:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.257 15:57:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:47.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.257 15:57:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.257 15:57:17 -- common/autotest_common.sh@10 -- # set +x 00:06:47.257 [2024-11-20 15:57:17.947469] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:47.257 [2024-11-20 15:57:17.947531] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1186593 ] 00:06:47.257 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.257 [2024-11-20 15:57:18.029665] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:47.517 [2024-11-20 15:57:18.069236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.517 [2024-11-20 15:57:18.069261] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.517 [2024-11-20 15:57:18.069369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.517 [2024-11-20 15:57:18.069370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:06:48.084 15:57:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.085 15:57:18 -- common/autotest_common.sh@862 -- # return 0 00:06:48.085 15:57:18 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:48.085 15:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.085 15:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.085 POWER: Env isn't set yet! 00:06:48.085 POWER: Attempting to initialise ACPI cpufreq power management... 00:06:48.085 POWER: Failed to write /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:48.085 POWER: Cannot set governor of lcore 0 to userspace 00:06:48.085 POWER: Attempting to initialise PSTAT power management... 00:06:48.085 POWER: Power management governor of lcore 0 has been set to 'performance' successfully 00:06:48.085 POWER: Initialized successfully for lcore 0 power management 00:06:48.085 POWER: Power management governor of lcore 1 has been set to 'performance' successfully 00:06:48.085 POWER: Initialized successfully for lcore 1 power management 00:06:48.085 POWER: Power management governor of lcore 2 has been set to 'performance' successfully 00:06:48.085 POWER: Initialized successfully for lcore 2 power management 00:06:48.085 POWER: Power management governor of lcore 3 has been set to 'performance' successfully 00:06:48.085 POWER: Initialized successfully for lcore 3 power management 00:06:48.085 [2024-11-20 15:57:18.809801] scheduler_dynamic.c: 387:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:48.085 [2024-11-20 15:57:18.809821] scheduler_dynamic.c: 389:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:48.085 [2024-11-20 15:57:18.809831] scheduler_dynamic.c: 391:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:48.085 15:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.085 15:57:18 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:48.085 15:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.085 15:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.085 [2024-11-20 15:57:18.873176] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
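A condensed sketch of the sequence the scheduler test just ran through (the real script backgrounds the app and uses waitforlisten before issuing RPCs; rpc_cmd wraps scripts/rpc.py on /var/tmp/spdk.sock):
# Start the scheduler test app paused, waiting for RPC-driven configuration
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f &
# Switch to the dynamic scheduler, then let initialization finish
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_set_scheduler dynamic
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_start_init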
00:06:48.085 15:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.085 15:57:18 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:48.085 15:57:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:48.085 15:57:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:48.085 15:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.085 ************************************ 00:06:48.085 START TEST scheduler_create_thread 00:06:48.085 ************************************ 00:06:48.085 15:57:18 -- common/autotest_common.sh@1114 -- # scheduler_create_thread 00:06:48.085 15:57:18 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:48.085 15:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.085 15:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.344 2 00:06:48.344 15:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.344 15:57:18 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:48.344 15:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.344 15:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.344 3 00:06:48.344 15:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.344 15:57:18 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:48.344 15:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.344 15:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.344 4 00:06:48.344 15:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.344 15:57:18 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:48.344 15:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.344 15:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.344 5 00:06:48.344 15:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.344 15:57:18 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:48.344 15:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.344 15:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.344 6 00:06:48.344 15:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.344 15:57:18 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:48.344 15:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.344 15:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.345 7 00:06:48.345 15:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.345 15:57:18 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:48.345 15:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.345 15:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.345 8 00:06:48.345 15:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.345 15:57:18 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:48.345 15:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.345 15:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.345 9 00:06:48.345 
15:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.345 15:57:18 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:48.345 15:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.345 15:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.345 10 00:06:48.345 15:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.345 15:57:18 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:48.345 15:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.345 15:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:48.345 15:57:18 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.345 15:57:18 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:48.345 15:57:18 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:48.345 15:57:18 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.345 15:57:18 -- common/autotest_common.sh@10 -- # set +x 00:06:49.281 15:57:19 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:49.281 15:57:19 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:49.281 15:57:19 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:49.281 15:57:19 -- common/autotest_common.sh@10 -- # set +x 00:06:50.659 15:57:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:50.659 15:57:21 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:50.659 15:57:21 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:50.659 15:57:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:50.659 15:57:21 -- common/autotest_common.sh@10 -- # set +x 00:06:51.595 15:57:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:51.595 00:06:51.595 real 0m3.382s 00:06:51.595 user 0m0.023s 00:06:51.595 sys 0m0.008s 00:06:51.595 15:57:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:51.595 15:57:22 -- common/autotest_common.sh@10 -- # set +x 00:06:51.595 ************************************ 00:06:51.595 END TEST scheduler_create_thread 00:06:51.595 ************************************ 00:06:51.595 15:57:22 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:51.595 15:57:22 -- scheduler/scheduler.sh@46 -- # killprocess 1186593 00:06:51.595 15:57:22 -- common/autotest_common.sh@936 -- # '[' -z 1186593 ']' 00:06:51.595 15:57:22 -- common/autotest_common.sh@940 -- # kill -0 1186593 00:06:51.595 15:57:22 -- common/autotest_common.sh@941 -- # uname 00:06:51.595 15:57:22 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:06:51.595 15:57:22 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1186593 00:06:51.595 15:57:22 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:06:51.595 15:57:22 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:06:51.595 15:57:22 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1186593' 00:06:51.595 killing process with pid 1186593 00:06:51.595 15:57:22 -- common/autotest_common.sh@955 -- # kill 1186593 00:06:51.595 15:57:22 -- common/autotest_common.sh@960 -- # wait 1186593 00:06:51.859 [2024-11-20 15:57:22.645096] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
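The scheduler_create_thread subtest above exercises the test app's plugin RPCs; a trimmed sketch of those calls (rpc_cmd is the autotest wrapper around scripts/rpc.py; thread ids 11 and 12 are the ones returned in this run):
# Create an active thread pinned to core 0 (mask 0x1, 100% busy)
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
# Create a fully idle thread pinned to the same core
rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
# Lower one thread's active load to 50%, then delete another by id
rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12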
00:06:52.118 POWER: Power management governor of lcore 0 has been set to 'powersave' successfully 00:06:52.118 POWER: Power management of lcore 0 has exited from 'performance' mode and been set back to the original 00:06:52.118 POWER: Power management governor of lcore 1 has been set to 'powersave' successfully 00:06:52.118 POWER: Power management of lcore 1 has exited from 'performance' mode and been set back to the original 00:06:52.118 POWER: Power management governor of lcore 2 has been set to 'powersave' successfully 00:06:52.118 POWER: Power management of lcore 2 has exited from 'performance' mode and been set back to the original 00:06:52.118 POWER: Power management governor of lcore 3 has been set to 'powersave' successfully 00:06:52.118 POWER: Power management of lcore 3 has exited from 'performance' mode and been set back to the original 00:06:52.118 00:06:52.118 real 0m5.150s 00:06:52.118 user 0m10.550s 00:06:52.118 sys 0m0.451s 00:06:52.118 15:57:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:06:52.118 15:57:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.118 ************************************ 00:06:52.118 END TEST event_scheduler 00:06:52.118 ************************************ 00:06:52.118 15:57:22 -- event/event.sh@51 -- # modprobe -n nbd 00:06:52.118 15:57:22 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:52.118 15:57:22 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:06:52.118 15:57:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:06:52.118 15:57:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.118 ************************************ 00:06:52.118 START TEST app_repeat 00:06:52.118 ************************************ 00:06:52.118 15:57:22 -- common/autotest_common.sh@1114 -- # app_repeat_test 00:06:52.118 15:57:22 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:52.118 15:57:22 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:52.118 15:57:22 -- event/event.sh@13 -- # local nbd_list 00:06:52.118 15:57:22 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:52.118 15:57:22 -- event/event.sh@14 -- # local bdev_list 00:06:52.118 15:57:22 -- event/event.sh@15 -- # local repeat_times=4 00:06:52.118 15:57:22 -- event/event.sh@17 -- # modprobe nbd 00:06:52.378 15:57:22 -- event/event.sh@19 -- # repeat_pid=1187672 00:06:52.378 15:57:22 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:52.378 15:57:22 -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:52.378 15:57:22 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1187672' 00:06:52.378 Process app_repeat pid: 1187672 00:06:52.378 15:57:22 -- event/event.sh@23 -- # for i in {0..2} 00:06:52.378 15:57:22 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:52.378 spdk_app_start Round 0 00:06:52.378 15:57:22 -- event/event.sh@25 -- # waitforlisten 1187672 /var/tmp/spdk-nbd.sock 00:06:52.378 15:57:22 -- common/autotest_common.sh@829 -- # '[' -z 1187672 ']' 00:06:52.378 15:57:22 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:52.378 15:57:22 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:52.378 15:57:22 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:52.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:52.378 15:57:22 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:52.378 15:57:22 -- common/autotest_common.sh@10 -- # set +x 00:06:52.378 [2024-11-20 15:57:22.952312] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:06:52.378 [2024-11-20 15:57:22.952378] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1187672 ] 00:06:52.378 EAL: No free 2048 kB hugepages reported on node 1 00:06:52.378 [2024-11-20 15:57:23.023951] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:52.378 [2024-11-20 15:57:23.065535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:52.378 [2024-11-20 15:57:23.065540] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.317 15:57:23 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:53.317 15:57:23 -- common/autotest_common.sh@862 -- # return 0 00:06:53.317 15:57:23 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:53.317 Malloc0 00:06:53.317 15:57:23 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:53.577 Malloc1 00:06:53.577 15:57:24 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:53.577 15:57:24 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.577 15:57:24 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.577 15:57:24 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:53.577 15:57:24 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.577 15:57:24 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:53.577 15:57:24 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:53.577 15:57:24 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.577 15:57:24 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.577 15:57:24 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:53.577 15:57:24 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.577 15:57:24 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:53.577 15:57:24 -- bdev/nbd_common.sh@12 -- # local i 00:06:53.577 15:57:24 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:53.577 15:57:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.577 15:57:24 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:53.577 /dev/nbd0 00:06:53.577 15:57:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:53.836 15:57:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:53.836 15:57:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:53.836 15:57:24 -- common/autotest_common.sh@867 -- # local i 00:06:53.836 15:57:24 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:53.836 15:57:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:53.836 15:57:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:53.836 15:57:24 -- common/autotest_common.sh@871 -- 
# break 00:06:53.836 15:57:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:53.836 15:57:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:53.836 15:57:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:53.836 1+0 records in 00:06:53.836 1+0 records out 00:06:53.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230261 s, 17.8 MB/s 00:06:53.836 15:57:24 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:53.836 15:57:24 -- common/autotest_common.sh@884 -- # size=4096 00:06:53.836 15:57:24 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:53.836 15:57:24 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:53.836 15:57:24 -- common/autotest_common.sh@887 -- # return 0 00:06:53.836 15:57:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.836 15:57:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.836 15:57:24 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:53.836 /dev/nbd1 00:06:53.836 15:57:24 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:53.836 15:57:24 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:53.836 15:57:24 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:53.836 15:57:24 -- common/autotest_common.sh@867 -- # local i 00:06:53.836 15:57:24 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:53.836 15:57:24 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:53.836 15:57:24 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:53.836 15:57:24 -- common/autotest_common.sh@871 -- # break 00:06:53.836 15:57:24 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:53.836 15:57:24 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:53.836 15:57:24 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:53.836 1+0 records in 00:06:53.836 1+0 records out 00:06:53.836 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000126015 s, 32.5 MB/s 00:06:53.836 15:57:24 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:53.836 15:57:24 -- common/autotest_common.sh@884 -- # size=4096 00:06:53.836 15:57:24 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:53.836 15:57:24 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:53.836 15:57:24 -- common/autotest_common.sh@887 -- # return 0 00:06:53.836 15:57:24 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.836 15:57:24 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.836 15:57:24 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:53.836 15:57:24 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.836 15:57:24 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.095 15:57:24 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:54.095 { 00:06:54.095 "nbd_device": "/dev/nbd0", 00:06:54.095 "bdev_name": "Malloc0" 00:06:54.095 }, 00:06:54.095 { 00:06:54.095 "nbd_device": "/dev/nbd1", 00:06:54.095 "bdev_name": "Malloc1" 00:06:54.095 } 00:06:54.095 ]' 
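The JSON above is what nbd_get_count parses; the same query can be issued by hand against the app_repeat instance (socket path as used in this run):
# List the NBD devices exported by the target and count them, as nbd_common.sh does
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
  | jq -r '.[] | .nbd_device' | grep -c /dev/nbd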
00:06:54.095 15:57:24 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:54.095 { 00:06:54.095 "nbd_device": "/dev/nbd0", 00:06:54.095 "bdev_name": "Malloc0" 00:06:54.095 }, 00:06:54.095 { 00:06:54.095 "nbd_device": "/dev/nbd1", 00:06:54.095 "bdev_name": "Malloc1" 00:06:54.095 } 00:06:54.095 ]' 00:06:54.095 15:57:24 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.095 15:57:24 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:54.095 /dev/nbd1' 00:06:54.095 15:57:24 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:54.095 /dev/nbd1' 00:06:54.096 15:57:24 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.096 15:57:24 -- bdev/nbd_common.sh@65 -- # count=2 00:06:54.096 15:57:24 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:54.096 15:57:24 -- bdev/nbd_common.sh@95 -- # count=2 00:06:54.096 15:57:24 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:54.096 15:57:24 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:54.096 15:57:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.096 15:57:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.096 15:57:24 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:54.096 15:57:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.096 15:57:24 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:54.096 15:57:24 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:54.096 256+0 records in 00:06:54.096 256+0 records out 00:06:54.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011546 s, 90.8 MB/s 00:06:54.096 15:57:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.096 15:57:24 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:54.096 256+0 records in 00:06:54.096 256+0 records out 00:06:54.096 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151391 s, 69.3 MB/s 00:06:54.096 15:57:24 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.096 15:57:24 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:54.355 256+0 records in 00:06:54.355 256+0 records out 00:06:54.355 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0191153 s, 54.9 MB/s 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 
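Once both exports pass waitfornbd, nbd_rpc_data_verify pushes real data through them. In the write pass, nbd_dd_data_verify fills a 1 MiB scratch file (256 x 4 KiB) from /dev/urandom and dd's it onto each /dev/nbdX with oflag=direct; in the verify pass it cmp's the first 1 MiB of every device back against that file and deletes it, so any corruption in the Malloc bdevs behind the exports fails the compare. A sketch of those two passes as the trace shows them (scratch path simplified, error handling omitted):

    nbd_dd_data_verify() {
        local nbd_list=($1) operation=$2
        local tmp_file=/tmp/nbdrandtest dev
        if [ "$operation" = write ]; then
            # seed 1 MiB of random data, then write it to every exported device
            dd if=/dev/urandom of="$tmp_file" bs=4096 count=256
            for dev in "${nbd_list[@]}"; do
                dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
            done
        elif [ "$operation" = verify ]; then
            # read the same 1 MiB back from every device and compare byte-for-byte
            for dev in "${nbd_list[@]}"; do
                cmp -b -n 1M "$tmp_file" "$dev"
            done
            rm "$tmp_file"
        fi
    }
    # as invoked above:
    #   nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
    #   nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify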
00:06:54.355 15:57:24 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@51 -- # local i 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.355 15:57:24 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:54.355 15:57:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.355 15:57:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.355 15:57:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.355 15:57:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.355 15:57:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.355 15:57:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.355 15:57:25 -- bdev/nbd_common.sh@41 -- # break 00:06:54.355 15:57:25 -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.355 15:57:25 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.355 15:57:25 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:54.614 15:57:25 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:54.614 15:57:25 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:54.614 15:57:25 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:54.614 15:57:25 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.614 15:57:25 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.614 15:57:25 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:54.614 15:57:25 -- bdev/nbd_common.sh@41 -- # break 00:06:54.614 15:57:25 -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.614 15:57:25 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.614 15:57:25 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.614 15:57:25 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.874 15:57:25 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:54.874 15:57:25 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:54.874 15:57:25 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.874 15:57:25 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:54.874 15:57:25 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.874 15:57:25 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:54.874 15:57:25 -- bdev/nbd_common.sh@65 -- # true 00:06:54.874 15:57:25 -- bdev/nbd_common.sh@65 -- # count=0 00:06:54.874 15:57:25 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:54.874 15:57:25 -- bdev/nbd_common.sh@104 -- # count=0 00:06:54.874 15:57:25 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:54.874 15:57:25 -- bdev/nbd_common.sh@109 -- # return 0 00:06:54.874 15:57:25 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:55.134 15:57:25 -- event/event.sh@35 -- # sleep 3 00:06:55.134 [2024-11-20 15:57:25.925163] app.c: 798:spdk_app_start: *NOTICE*: Total cores 
available: 2 00:06:55.394 [2024-11-20 15:57:25.958348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:06:55.394 [2024-11-20 15:57:25.958351] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.394 [2024-11-20 15:57:25.999285] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:55.394 [2024-11-20 15:57:25.999326] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:58.680 15:57:28 -- event/event.sh@23 -- # for i in {0..2} 00:06:58.680 15:57:28 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:58.680 spdk_app_start Round 1 00:06:58.680 15:57:28 -- event/event.sh@25 -- # waitforlisten 1187672 /var/tmp/spdk-nbd.sock 00:06:58.680 15:57:28 -- common/autotest_common.sh@829 -- # '[' -z 1187672 ']' 00:06:58.680 15:57:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:58.680 15:57:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:58.680 15:57:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:58.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:58.680 15:57:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:58.680 15:57:28 -- common/autotest_common.sh@10 -- # set +x 00:06:58.680 15:57:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:58.680 15:57:28 -- common/autotest_common.sh@862 -- # return 0 00:06:58.680 15:57:28 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.680 Malloc0 00:06:58.680 15:57:29 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:58.680 Malloc1 00:06:58.680 15:57:29 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.680 15:57:29 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.680 15:57:29 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.680 15:57:29 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:58.680 15:57:29 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.680 15:57:29 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:58.680 15:57:29 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:58.680 15:57:29 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.680 15:57:29 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:58.680 15:57:29 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:58.680 15:57:29 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:58.680 15:57:29 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:58.680 15:57:29 -- bdev/nbd_common.sh@12 -- # local i 00:06:58.680 15:57:29 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:58.680 15:57:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.680 15:57:29 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:58.680 /dev/nbd0 00:06:58.939 15:57:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:58.939 15:57:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:58.939 
15:57:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:58.939 15:57:29 -- common/autotest_common.sh@867 -- # local i 00:06:58.939 15:57:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:58.939 15:57:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:58.939 15:57:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:58.939 15:57:29 -- common/autotest_common.sh@871 -- # break 00:06:58.939 15:57:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:58.939 15:57:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:58.939 15:57:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.939 1+0 records in 00:06:58.939 1+0 records out 00:06:58.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000244462 s, 16.8 MB/s 00:06:58.939 15:57:29 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:58.939 15:57:29 -- common/autotest_common.sh@884 -- # size=4096 00:06:58.939 15:57:29 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:58.939 15:57:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:58.939 15:57:29 -- common/autotest_common.sh@887 -- # return 0 00:06:58.939 15:57:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.939 15:57:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.939 15:57:29 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:58.939 /dev/nbd1 00:06:58.939 15:57:29 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:58.939 15:57:29 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:58.939 15:57:29 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:58.939 15:57:29 -- common/autotest_common.sh@867 -- # local i 00:06:58.939 15:57:29 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:58.939 15:57:29 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:58.939 15:57:29 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:58.939 15:57:29 -- common/autotest_common.sh@871 -- # break 00:06:58.939 15:57:29 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:58.939 15:57:29 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:58.939 15:57:29 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:58.939 1+0 records in 00:06:58.939 1+0 records out 00:06:58.939 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000242631 s, 16.9 MB/s 00:06:58.939 15:57:29 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:58.939 15:57:29 -- common/autotest_common.sh@884 -- # size=4096 00:06:58.940 15:57:29 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:06:58.940 15:57:29 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:58.940 15:57:29 -- common/autotest_common.sh@887 -- # return 0 00:06:58.940 15:57:29 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:58.940 15:57:29 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:58.940 15:57:29 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:58.940 15:57:29 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:58.940 
15:57:29 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:59.199 { 00:06:59.199 "nbd_device": "/dev/nbd0", 00:06:59.199 "bdev_name": "Malloc0" 00:06:59.199 }, 00:06:59.199 { 00:06:59.199 "nbd_device": "/dev/nbd1", 00:06:59.199 "bdev_name": "Malloc1" 00:06:59.199 } 00:06:59.199 ]' 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:59.199 { 00:06:59.199 "nbd_device": "/dev/nbd0", 00:06:59.199 "bdev_name": "Malloc0" 00:06:59.199 }, 00:06:59.199 { 00:06:59.199 "nbd_device": "/dev/nbd1", 00:06:59.199 "bdev_name": "Malloc1" 00:06:59.199 } 00:06:59.199 ]' 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:59.199 /dev/nbd1' 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:59.199 /dev/nbd1' 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@65 -- # count=2 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@66 -- # echo 2 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@95 -- # count=2 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:59.199 256+0 records in 00:06:59.199 256+0 records out 00:06:59.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00461432 s, 227 MB/s 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:59.199 256+0 records in 00:06:59.199 256+0 records out 00:06:59.199 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0193889 s, 54.1 MB/s 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:59.199 15:57:29 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:59.458 256+0 records in 00:06:59.458 256+0 records out 00:06:59.458 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0161486 s, 64.9 MB/s 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:59.458 
15:57:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@51 -- # local i 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@41 -- # break 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:59.458 15:57:30 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:59.717 15:57:30 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:59.717 15:57:30 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:59.717 15:57:30 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:59.717 15:57:30 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:59.717 15:57:30 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:59.717 15:57:30 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:59.717 15:57:30 -- bdev/nbd_common.sh@41 -- # break 00:06:59.717 15:57:30 -- bdev/nbd_common.sh@45 -- # return 0 00:06:59.717 15:57:30 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:59.717 15:57:30 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:59.717 15:57:30 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:59.974 15:57:30 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:59.974 15:57:30 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:59.974 15:57:30 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:59.974 15:57:30 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:59.974 15:57:30 -- bdev/nbd_common.sh@65 -- # echo '' 00:06:59.974 15:57:30 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:59.974 15:57:30 -- bdev/nbd_common.sh@65 -- # true 00:06:59.974 15:57:30 -- bdev/nbd_common.sh@65 -- # count=0 00:06:59.974 15:57:30 -- bdev/nbd_common.sh@66 -- # echo 0 00:06:59.974 15:57:30 -- bdev/nbd_common.sh@104 -- # count=0 00:06:59.974 
15:57:30 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:59.974 15:57:30 -- bdev/nbd_common.sh@109 -- # return 0 00:06:59.974 15:57:30 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:00.233 15:57:30 -- event/event.sh@35 -- # sleep 3 00:07:00.233 [2024-11-20 15:57:31.023654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.493 [2024-11-20 15:57:31.056823] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.493 [2024-11-20 15:57:31.056825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.493 [2024-11-20 15:57:31.097923] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:00.493 [2024-11-20 15:57:31.097962] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:03.781 15:57:33 -- event/event.sh@23 -- # for i in {0..2} 00:07:03.781 15:57:33 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:03.781 spdk_app_start Round 2 00:07:03.781 15:57:33 -- event/event.sh@25 -- # waitforlisten 1187672 /var/tmp/spdk-nbd.sock 00:07:03.781 15:57:33 -- common/autotest_common.sh@829 -- # '[' -z 1187672 ']' 00:07:03.781 15:57:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:03.781 15:57:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:03.781 15:57:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:03.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:03.781 15:57:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:03.781 15:57:33 -- common/autotest_common.sh@10 -- # set +x 00:07:03.781 15:57:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:03.781 15:57:34 -- common/autotest_common.sh@862 -- # return 0 00:07:03.781 15:57:34 -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.781 Malloc0 00:07:03.781 15:57:34 -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:03.781 Malloc1 00:07:03.781 15:57:34 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.781 15:57:34 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.781 15:57:34 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.781 15:57:34 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:03.782 15:57:34 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.782 15:57:34 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:03.782 15:57:34 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:03.782 15:57:34 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:03.782 15:57:34 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:03.782 15:57:34 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:03.782 15:57:34 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:03.782 15:57:34 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:03.782 15:57:34 -- bdev/nbd_common.sh@12 -- # local i 00:07:03.782 15:57:34 -- bdev/nbd_common.sh@14 -- 
# (( i = 0 )) 00:07:03.782 15:57:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:03.782 15:57:34 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:03.782 /dev/nbd0 00:07:04.041 15:57:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:04.041 15:57:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:04.041 15:57:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:07:04.041 15:57:34 -- common/autotest_common.sh@867 -- # local i 00:07:04.041 15:57:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:04.041 15:57:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:04.041 15:57:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:07:04.041 15:57:34 -- common/autotest_common.sh@871 -- # break 00:07:04.041 15:57:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:04.041 15:57:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:04.041 15:57:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.041 1+0 records in 00:07:04.041 1+0 records out 00:07:04.041 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222431 s, 18.4 MB/s 00:07:04.041 15:57:34 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:04.041 15:57:34 -- common/autotest_common.sh@884 -- # size=4096 00:07:04.041 15:57:34 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:04.041 15:57:34 -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:04.041 15:57:34 -- common/autotest_common.sh@887 -- # return 0 00:07:04.041 15:57:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.041 15:57:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.041 15:57:34 -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:04.041 /dev/nbd1 00:07:04.041 15:57:34 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:04.041 15:57:34 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:04.041 15:57:34 -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:07:04.041 15:57:34 -- common/autotest_common.sh@867 -- # local i 00:07:04.042 15:57:34 -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:07:04.042 15:57:34 -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:07:04.042 15:57:34 -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:07:04.042 15:57:34 -- common/autotest_common.sh@871 -- # break 00:07:04.042 15:57:34 -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:07:04.042 15:57:34 -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:07:04.042 15:57:34 -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:04.042 1+0 records in 00:07:04.042 1+0 records out 00:07:04.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230878 s, 17.7 MB/s 00:07:04.042 15:57:34 -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:04.042 15:57:34 -- common/autotest_common.sh@884 -- # size=4096 00:07:04.042 15:57:34 -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdtest 00:07:04.042 15:57:34 -- 
common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:07:04.042 15:57:34 -- common/autotest_common.sh@887 -- # return 0 00:07:04.042 15:57:34 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:04.042 15:57:34 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:04.042 15:57:34 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.042 15:57:34 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.042 15:57:34 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:04.300 15:57:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:04.301 { 00:07:04.301 "nbd_device": "/dev/nbd0", 00:07:04.301 "bdev_name": "Malloc0" 00:07:04.301 }, 00:07:04.301 { 00:07:04.301 "nbd_device": "/dev/nbd1", 00:07:04.301 "bdev_name": "Malloc1" 00:07:04.301 } 00:07:04.301 ]' 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:04.301 { 00:07:04.301 "nbd_device": "/dev/nbd0", 00:07:04.301 "bdev_name": "Malloc0" 00:07:04.301 }, 00:07:04.301 { 00:07:04.301 "nbd_device": "/dev/nbd1", 00:07:04.301 "bdev_name": "Malloc1" 00:07:04.301 } 00:07:04.301 ]' 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:04.301 /dev/nbd1' 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:04.301 /dev/nbd1' 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@65 -- # count=2 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@66 -- # echo 2 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@95 -- # count=2 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:04.301 256+0 records in 00:07:04.301 256+0 records out 00:07:04.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106456 s, 98.5 MB/s 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:04.301 256+0 records in 00:07:04.301 256+0 records out 00:07:04.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147138 s, 71.3 MB/s 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:04.301 15:57:35 -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:04.560 256+0 records in 00:07:04.560 256+0 records out 00:07:04.560 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0208356 s, 50.3 MB/s 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/nbdrandtest 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@51 -- # local i 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@41 -- # break 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:04.560 15:57:35 -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:04.819 15:57:35 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:04.819 15:57:35 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:04.819 15:57:35 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:04.819 15:57:35 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:04.819 15:57:35 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:04.819 15:57:35 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:04.819 15:57:35 -- bdev/nbd_common.sh@41 -- # break 00:07:04.819 15:57:35 -- bdev/nbd_common.sh@45 -- # return 0 00:07:04.819 15:57:35 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:04.819 15:57:35 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:04.819 15:57:35 -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:05.078 15:57:35 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:05.078 15:57:35 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:05.078 15:57:35 -- bdev/nbd_common.sh@64 -- # echo 
'[]' 00:07:05.078 15:57:35 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:05.078 15:57:35 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:05.078 15:57:35 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:05.078 15:57:35 -- bdev/nbd_common.sh@65 -- # true 00:07:05.078 15:57:35 -- bdev/nbd_common.sh@65 -- # count=0 00:07:05.078 15:57:35 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:05.078 15:57:35 -- bdev/nbd_common.sh@104 -- # count=0 00:07:05.078 15:57:35 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:05.078 15:57:35 -- bdev/nbd_common.sh@109 -- # return 0 00:07:05.078 15:57:35 -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:05.337 15:57:35 -- event/event.sh@35 -- # sleep 3 00:07:05.337 [2024-11-20 15:57:36.111797] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:05.596 [2024-11-20 15:57:36.144871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:05.596 [2024-11-20 15:57:36.144874] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.596 [2024-11-20 15:57:36.186042] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:05.596 [2024-11-20 15:57:36.186095] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:08.885 15:57:38 -- event/event.sh@38 -- # waitforlisten 1187672 /var/tmp/spdk-nbd.sock 00:07:08.885 15:57:38 -- common/autotest_common.sh@829 -- # '[' -z 1187672 ']' 00:07:08.885 15:57:38 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:08.885 15:57:38 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.885 15:57:38 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:08.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:08.885 15:57:38 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.885 15:57:38 -- common/autotest_common.sh@10 -- # set +x 00:07:08.885 15:57:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:08.885 15:57:39 -- common/autotest_common.sh@862 -- # return 0 00:07:08.885 15:57:39 -- event/event.sh@39 -- # killprocess 1187672 00:07:08.885 15:57:39 -- common/autotest_common.sh@936 -- # '[' -z 1187672 ']' 00:07:08.885 15:57:39 -- common/autotest_common.sh@940 -- # kill -0 1187672 00:07:08.885 15:57:39 -- common/autotest_common.sh@941 -- # uname 00:07:08.885 15:57:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:08.885 15:57:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1187672 00:07:08.885 15:57:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:08.885 15:57:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:08.885 15:57:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1187672' 00:07:08.885 killing process with pid 1187672 00:07:08.885 15:57:39 -- common/autotest_common.sh@955 -- # kill 1187672 00:07:08.885 15:57:39 -- common/autotest_common.sh@960 -- # wait 1187672 00:07:08.885 spdk_app_start is called in Round 0. 00:07:08.885 Shutdown signal received, stop current app iteration 00:07:08.885 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:07:08.885 spdk_app_start is called in Round 1. 
00:07:08.885 Shutdown signal received, stop current app iteration 00:07:08.885 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:07:08.885 spdk_app_start is called in Round 2. 00:07:08.885 Shutdown signal received, stop current app iteration 00:07:08.885 Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 reinitialization... 00:07:08.885 spdk_app_start is called in Round 3. 00:07:08.885 Shutdown signal received, stop current app iteration 00:07:08.885 15:57:39 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:08.885 15:57:39 -- event/event.sh@42 -- # return 0 00:07:08.885 00:07:08.885 real 0m16.424s 00:07:08.885 user 0m35.354s 00:07:08.885 sys 0m2.898s 00:07:08.885 15:57:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:08.885 15:57:39 -- common/autotest_common.sh@10 -- # set +x 00:07:08.885 ************************************ 00:07:08.885 END TEST app_repeat 00:07:08.885 ************************************ 00:07:08.885 15:57:39 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:08.885 15:57:39 -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:08.885 15:57:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:08.885 15:57:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.885 15:57:39 -- common/autotest_common.sh@10 -- # set +x 00:07:08.885 ************************************ 00:07:08.885 START TEST cpu_locks 00:07:08.885 ************************************ 00:07:08.885 15:57:39 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:08.885 * Looking for test storage... 00:07:08.885 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/event 00:07:08.885 15:57:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:08.885 15:57:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:08.885 15:57:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:08.885 15:57:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:08.885 15:57:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:08.885 15:57:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:08.885 15:57:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:08.885 15:57:39 -- scripts/common.sh@335 -- # IFS=.-: 00:07:08.885 15:57:39 -- scripts/common.sh@335 -- # read -ra ver1 00:07:08.885 15:57:39 -- scripts/common.sh@336 -- # IFS=.-: 00:07:08.885 15:57:39 -- scripts/common.sh@336 -- # read -ra ver2 00:07:08.885 15:57:39 -- scripts/common.sh@337 -- # local 'op=<' 00:07:08.885 15:57:39 -- scripts/common.sh@339 -- # ver1_l=2 00:07:08.885 15:57:39 -- scripts/common.sh@340 -- # ver2_l=1 00:07:08.885 15:57:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:08.885 15:57:39 -- scripts/common.sh@343 -- # case "$op" in 00:07:08.885 15:57:39 -- scripts/common.sh@344 -- # : 1 00:07:08.885 15:57:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:08.885 15:57:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:08.885 15:57:39 -- scripts/common.sh@364 -- # decimal 1 00:07:08.885 15:57:39 -- scripts/common.sh@352 -- # local d=1 00:07:08.885 15:57:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:08.885 15:57:39 -- scripts/common.sh@354 -- # echo 1 00:07:08.885 15:57:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:08.885 15:57:39 -- scripts/common.sh@365 -- # decimal 2 00:07:08.885 15:57:39 -- scripts/common.sh@352 -- # local d=2 00:07:08.885 15:57:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:08.885 15:57:39 -- scripts/common.sh@354 -- # echo 2 00:07:08.885 15:57:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:08.885 15:57:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:08.885 15:57:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:08.885 15:57:39 -- scripts/common.sh@367 -- # return 0 00:07:08.885 15:57:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:08.885 15:57:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:08.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.885 --rc genhtml_branch_coverage=1 00:07:08.885 --rc genhtml_function_coverage=1 00:07:08.885 --rc genhtml_legend=1 00:07:08.885 --rc geninfo_all_blocks=1 00:07:08.885 --rc geninfo_unexecuted_blocks=1 00:07:08.885 00:07:08.885 ' 00:07:08.885 15:57:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:08.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.885 --rc genhtml_branch_coverage=1 00:07:08.885 --rc genhtml_function_coverage=1 00:07:08.885 --rc genhtml_legend=1 00:07:08.885 --rc geninfo_all_blocks=1 00:07:08.885 --rc geninfo_unexecuted_blocks=1 00:07:08.885 00:07:08.885 ' 00:07:08.885 15:57:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:08.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.885 --rc genhtml_branch_coverage=1 00:07:08.885 --rc genhtml_function_coverage=1 00:07:08.885 --rc genhtml_legend=1 00:07:08.885 --rc geninfo_all_blocks=1 00:07:08.885 --rc geninfo_unexecuted_blocks=1 00:07:08.885 00:07:08.885 ' 00:07:08.885 15:57:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:08.885 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:08.885 --rc genhtml_branch_coverage=1 00:07:08.885 --rc genhtml_function_coverage=1 00:07:08.885 --rc genhtml_legend=1 00:07:08.885 --rc geninfo_all_blocks=1 00:07:08.885 --rc geninfo_unexecuted_blocks=1 00:07:08.885 00:07:08.885 ' 00:07:08.885 15:57:39 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:08.885 15:57:39 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:08.885 15:57:39 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:08.885 15:57:39 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:08.885 15:57:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:08.885 15:57:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:08.885 15:57:39 -- common/autotest_common.sh@10 -- # set +x 00:07:08.885 ************************************ 00:07:08.885 START TEST default_locks 00:07:08.885 ************************************ 00:07:08.885 15:57:39 -- common/autotest_common.sh@1114 -- # default_locks 00:07:08.885 15:57:39 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1190771 00:07:08.885 15:57:39 -- event/cpu_locks.sh@47 -- # waitforlisten 1190771 00:07:08.885 15:57:39 -- event/cpu_locks.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:08.885 15:57:39 -- common/autotest_common.sh@829 -- # '[' -z 1190771 ']' 00:07:08.885 15:57:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.885 15:57:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.885 15:57:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.885 15:57:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.885 15:57:39 -- common/autotest_common.sh@10 -- # set +x 00:07:08.885 [2024-11-20 15:57:39.628664] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:08.885 [2024-11-20 15:57:39.628720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1190771 ] 00:07:08.885 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.145 [2024-11-20 15:57:39.698058] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.145 [2024-11-20 15:57:39.735138] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:09.145 [2024-11-20 15:57:39.735254] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.712 15:57:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.712 15:57:40 -- common/autotest_common.sh@862 -- # return 0 00:07:09.712 15:57:40 -- event/cpu_locks.sh@49 -- # locks_exist 1190771 00:07:09.712 15:57:40 -- event/cpu_locks.sh@22 -- # lslocks -p 1190771 00:07:09.712 15:57:40 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.970 lslocks: write error 00:07:09.970 15:57:40 -- event/cpu_locks.sh@50 -- # killprocess 1190771 00:07:09.970 15:57:40 -- common/autotest_common.sh@936 -- # '[' -z 1190771 ']' 00:07:09.970 15:57:40 -- common/autotest_common.sh@940 -- # kill -0 1190771 00:07:09.970 15:57:40 -- common/autotest_common.sh@941 -- # uname 00:07:09.970 15:57:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:09.970 15:57:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1190771 00:07:10.229 15:57:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:10.229 15:57:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:10.229 15:57:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1190771' 00:07:10.229 killing process with pid 1190771 00:07:10.229 15:57:40 -- common/autotest_common.sh@955 -- # kill 1190771 00:07:10.229 15:57:40 -- common/autotest_common.sh@960 -- # wait 1190771 00:07:10.488 15:57:41 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1190771 00:07:10.488 15:57:41 -- common/autotest_common.sh@650 -- # local es=0 00:07:10.488 15:57:41 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1190771 00:07:10.488 15:57:41 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:10.488 15:57:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.488 15:57:41 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:10.488 15:57:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:10.488 15:57:41 -- common/autotest_common.sh@653 -- # waitforlisten 1190771 00:07:10.489 15:57:41 -- 
common/autotest_common.sh@829 -- # '[' -z 1190771 ']' 00:07:10.489 15:57:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.489 15:57:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.489 15:57:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.489 15:57:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.489 15:57:41 -- common/autotest_common.sh@10 -- # set +x 00:07:10.489 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1190771) - No such process 00:07:10.489 ERROR: process (pid: 1190771) is no longer running 00:07:10.489 15:57:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:10.489 15:57:41 -- common/autotest_common.sh@862 -- # return 1 00:07:10.489 15:57:41 -- common/autotest_common.sh@653 -- # es=1 00:07:10.489 15:57:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:10.489 15:57:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:10.489 15:57:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:10.489 15:57:41 -- event/cpu_locks.sh@54 -- # no_locks 00:07:10.489 15:57:41 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:10.489 15:57:41 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:10.489 15:57:41 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:10.489 00:07:10.489 real 0m1.509s 00:07:10.489 user 0m1.595s 00:07:10.489 sys 0m0.530s 00:07:10.489 15:57:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:10.489 15:57:41 -- common/autotest_common.sh@10 -- # set +x 00:07:10.489 ************************************ 00:07:10.489 END TEST default_locks 00:07:10.489 ************************************ 00:07:10.489 15:57:41 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:10.489 15:57:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:10.489 15:57:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:10.489 15:57:41 -- common/autotest_common.sh@10 -- # set +x 00:07:10.489 ************************************ 00:07:10.489 START TEST default_locks_via_rpc 00:07:10.489 ************************************ 00:07:10.489 15:57:41 -- common/autotest_common.sh@1114 -- # default_locks_via_rpc 00:07:10.489 15:57:41 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1191075 00:07:10.489 15:57:41 -- event/cpu_locks.sh@63 -- # waitforlisten 1191075 00:07:10.489 15:57:41 -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:10.489 15:57:41 -- common/autotest_common.sh@829 -- # '[' -z 1191075 ']' 00:07:10.489 15:57:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.489 15:57:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.489 15:57:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.489 15:57:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.489 15:57:41 -- common/autotest_common.sh@10 -- # set +x 00:07:10.489 [2024-11-20 15:57:41.185910] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
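The default_locks run that just completed reduces to a small lifecycle check: start spdk_tgt pinned to core 0 (-m 0x1), wait for its RPC socket, confirm with lslocks that the process holds a file lock whose name contains spdk_cpu_lock, kill it, and then require that waiting for the same pid fails (the NOT / es=1 branch above). The stray "lslocks: write error" is most likely lslocks hitting a closed pipe because grep -q exits on its first match, not a test failure. A condensed sketch of that flow with the autotest helpers reduced to plain shell (retry counts and the rpc.py readiness probe are illustrative):

    spdk_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    "$spdk_tgt" -m 0x1 &
    pid=$!
    # simplified waitforlisten: poll until the default RPC socket answers
    until "$rpc_py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do sleep 0.1; done

    # a live target must hold its CPU-core lock file; one match is enough
    # (grep -q closing the pipe early is what makes lslocks print "write error")
    lslocks -p "$pid" | grep -q spdk_cpu_lock || exit 1

    kill "$pid"; wait "$pid"

    # equivalent of the NOT waitforlisten step: the dead pid must not come back
    ! kill -0 "$pid" 2>/dev/null || exit 1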
00:07:10.489 [2024-11-20 15:57:41.185967] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1191075 ] 00:07:10.489 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.489 [2024-11-20 15:57:41.258542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.748 [2024-11-20 15:57:41.294719] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:10.748 [2024-11-20 15:57:41.294841] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.315 15:57:41 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.315 15:57:41 -- common/autotest_common.sh@862 -- # return 0 00:07:11.315 15:57:41 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:11.315 15:57:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.315 15:57:41 -- common/autotest_common.sh@10 -- # set +x 00:07:11.315 15:57:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.315 15:57:42 -- event/cpu_locks.sh@67 -- # no_locks 00:07:11.315 15:57:42 -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:11.315 15:57:42 -- event/cpu_locks.sh@26 -- # local lock_files 00:07:11.315 15:57:42 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:11.315 15:57:42 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:11.315 15:57:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:11.315 15:57:42 -- common/autotest_common.sh@10 -- # set +x 00:07:11.315 15:57:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.315 15:57:42 -- event/cpu_locks.sh@71 -- # locks_exist 1191075 00:07:11.316 15:57:42 -- event/cpu_locks.sh@22 -- # lslocks -p 1191075 00:07:11.316 15:57:42 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:11.574 15:57:42 -- event/cpu_locks.sh@73 -- # killprocess 1191075 00:07:11.574 15:57:42 -- common/autotest_common.sh@936 -- # '[' -z 1191075 ']' 00:07:11.574 15:57:42 -- common/autotest_common.sh@940 -- # kill -0 1191075 00:07:11.574 15:57:42 -- common/autotest_common.sh@941 -- # uname 00:07:11.574 15:57:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:11.574 15:57:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1191075 00:07:11.574 15:57:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:11.574 15:57:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:11.574 15:57:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1191075' 00:07:11.574 killing process with pid 1191075 00:07:11.574 15:57:42 -- common/autotest_common.sh@955 -- # kill 1191075 00:07:11.574 15:57:42 -- common/autotest_common.sh@960 -- # wait 1191075 00:07:12.141 00:07:12.141 real 0m1.526s 00:07:12.141 user 0m1.594s 00:07:12.141 sys 0m0.524s 00:07:12.141 15:57:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:12.141 15:57:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.141 ************************************ 00:07:12.141 END TEST default_locks_via_rpc 00:07:12.141 ************************************ 00:07:12.141 15:57:42 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:12.141 15:57:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:12.141 15:57:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:12.141 15:57:42 -- 
common/autotest_common.sh@10 -- # set +x 00:07:12.141 ************************************ 00:07:12.141 START TEST non_locking_app_on_locked_coremask 00:07:12.141 ************************************ 00:07:12.141 15:57:42 -- common/autotest_common.sh@1114 -- # non_locking_app_on_locked_coremask 00:07:12.141 15:57:42 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1191422 00:07:12.141 15:57:42 -- event/cpu_locks.sh@81 -- # waitforlisten 1191422 /var/tmp/spdk.sock 00:07:12.141 15:57:42 -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:12.141 15:57:42 -- common/autotest_common.sh@829 -- # '[' -z 1191422 ']' 00:07:12.141 15:57:42 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:12.141 15:57:42 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:12.141 15:57:42 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:12.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:12.141 15:57:42 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:12.141 15:57:42 -- common/autotest_common.sh@10 -- # set +x 00:07:12.141 [2024-11-20 15:57:42.765033] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:12.142 [2024-11-20 15:57:42.765088] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1191422 ] 00:07:12.142 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.142 [2024-11-20 15:57:42.834217] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.142 [2024-11-20 15:57:42.871185] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:12.142 [2024-11-20 15:57:42.871302] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.079 15:57:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.079 15:57:43 -- common/autotest_common.sh@862 -- # return 0 00:07:13.079 15:57:43 -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:13.079 15:57:43 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1191503 00:07:13.079 15:57:43 -- event/cpu_locks.sh@85 -- # waitforlisten 1191503 /var/tmp/spdk2.sock 00:07:13.079 15:57:43 -- common/autotest_common.sh@829 -- # '[' -z 1191503 ']' 00:07:13.079 15:57:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:13.079 15:57:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:13.079 15:57:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:13.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:13.079 15:57:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:13.079 15:57:43 -- common/autotest_common.sh@10 -- # set +x 00:07:13.079 [2024-11-20 15:57:43.600703] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
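The non_locking_app_on_locked_coremask case starting here exercises the other side of that lock: the first spdk_tgt on -m 0x1 takes the core-0 lock, and a second target can share the same core mask only because it is launched with --disable-cpumask-locks and given its own RPC socket (/var/tmp/spdk2.sock in the trace). A stripped-down sketch of that arrangement, with the readiness wait reduced to an rpc.py polling loop and the binary/script paths taken from the trace:

    spdk_tgt=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt
    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

    wait_for_rpc() {   # stand-in for waitforlisten: poll a socket until rpc.py gets an answer
        local sock=$1 i
        for ((i = 0; i < 100; i++)); do
            "$rpc_py" -s "$sock" rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }

    "$spdk_tgt" -m 0x1 &                   # first target: takes the lock for core 0
    pid1=$!
    wait_for_rpc /var/tmp/spdk.sock

    # second target on the same mask: only expected to start because cpumask locks are disabled
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
    pid2=$!
    wait_for_rpc /var/tmp/spdk2.sock

    kill "$pid1" "$pid2"; wait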
00:07:13.079 [2024-11-20 15:57:43.600758] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1191503 ] 00:07:13.079 EAL: No free 2048 kB hugepages reported on node 1 00:07:13.079 [2024-11-20 15:57:43.695451] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:13.079 [2024-11-20 15:57:43.695480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:13.079 [2024-11-20 15:57:43.772934] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:13.079 [2024-11-20 15:57:43.773051] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.648 15:57:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:13.648 15:57:44 -- common/autotest_common.sh@862 -- # return 0 00:07:13.648 15:57:44 -- event/cpu_locks.sh@87 -- # locks_exist 1191422 00:07:13.648 15:57:44 -- event/cpu_locks.sh@22 -- # lslocks -p 1191422 00:07:13.648 15:57:44 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:15.026 lslocks: write error 00:07:15.026 15:57:45 -- event/cpu_locks.sh@89 -- # killprocess 1191422 00:07:15.026 15:57:45 -- common/autotest_common.sh@936 -- # '[' -z 1191422 ']' 00:07:15.026 15:57:45 -- common/autotest_common.sh@940 -- # kill -0 1191422 00:07:15.026 15:57:45 -- common/autotest_common.sh@941 -- # uname 00:07:15.026 15:57:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:15.026 15:57:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1191422 00:07:15.026 15:57:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:15.026 15:57:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:15.026 15:57:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1191422' 00:07:15.026 killing process with pid 1191422 00:07:15.026 15:57:45 -- common/autotest_common.sh@955 -- # kill 1191422 00:07:15.026 15:57:45 -- common/autotest_common.sh@960 -- # wait 1191422 00:07:15.595 15:57:46 -- event/cpu_locks.sh@90 -- # killprocess 1191503 00:07:15.595 15:57:46 -- common/autotest_common.sh@936 -- # '[' -z 1191503 ']' 00:07:15.595 15:57:46 -- common/autotest_common.sh@940 -- # kill -0 1191503 00:07:15.595 15:57:46 -- common/autotest_common.sh@941 -- # uname 00:07:15.595 15:57:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:15.595 15:57:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1191503 00:07:15.855 15:57:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:15.855 15:57:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:15.855 15:57:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1191503' 00:07:15.855 killing process with pid 1191503 00:07:15.855 15:57:46 -- common/autotest_common.sh@955 -- # kill 1191503 00:07:15.855 15:57:46 -- common/autotest_common.sh@960 -- # wait 1191503 00:07:16.115 00:07:16.115 real 0m3.992s 00:07:16.115 user 0m4.280s 00:07:16.115 sys 0m1.401s 00:07:16.115 15:57:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:16.115 15:57:46 -- common/autotest_common.sh@10 -- # set +x 00:07:16.115 ************************************ 00:07:16.115 END TEST non_locking_app_on_locked_coremask 00:07:16.115 ************************************ 00:07:16.115 15:57:46 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask 
locking_app_on_unlocked_coremask 00:07:16.115 15:57:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:16.115 15:57:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:16.115 15:57:46 -- common/autotest_common.sh@10 -- # set +x 00:07:16.115 ************************************ 00:07:16.115 START TEST locking_app_on_unlocked_coremask 00:07:16.115 ************************************ 00:07:16.115 15:57:46 -- common/autotest_common.sh@1114 -- # locking_app_on_unlocked_coremask 00:07:16.115 15:57:46 -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1192077 00:07:16.115 15:57:46 -- event/cpu_locks.sh@99 -- # waitforlisten 1192077 /var/tmp/spdk.sock 00:07:16.115 15:57:46 -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:16.115 15:57:46 -- common/autotest_common.sh@829 -- # '[' -z 1192077 ']' 00:07:16.115 15:57:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.115 15:57:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:16.115 15:57:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:16.115 15:57:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:16.115 15:57:46 -- common/autotest_common.sh@10 -- # set +x 00:07:16.115 [2024-11-20 15:57:46.810294] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:16.115 [2024-11-20 15:57:46.810351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192077 ] 00:07:16.115 EAL: No free 2048 kB hugepages reported on node 1 00:07:16.115 [2024-11-20 15:57:46.878307] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:16.115 [2024-11-20 15:57:46.878338] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.115 [2024-11-20 15:57:46.910262] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:16.115 [2024-11-20 15:57:46.910383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.052 15:57:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.052 15:57:47 -- common/autotest_common.sh@862 -- # return 0 00:07:17.052 15:57:47 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1192346 00:07:17.052 15:57:47 -- event/cpu_locks.sh@103 -- # waitforlisten 1192346 /var/tmp/spdk2.sock 00:07:17.052 15:57:47 -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:17.052 15:57:47 -- common/autotest_common.sh@829 -- # '[' -z 1192346 ']' 00:07:17.052 15:57:47 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.052 15:57:47 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:17.052 15:57:47 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:17.052 15:57:47 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:17.052 15:57:47 -- common/autotest_common.sh@10 -- # set +x 00:07:17.052 [2024-11-20 15:57:47.661760] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:17.052 [2024-11-20 15:57:47.661815] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192346 ] 00:07:17.052 EAL: No free 2048 kB hugepages reported on node 1 00:07:17.052 [2024-11-20 15:57:47.755520] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.052 [2024-11-20 15:57:47.827713] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:17.052 [2024-11-20 15:57:47.827858] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.989 15:57:48 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:17.989 15:57:48 -- common/autotest_common.sh@862 -- # return 0 00:07:17.990 15:57:48 -- event/cpu_locks.sh@105 -- # locks_exist 1192346 00:07:17.990 15:57:48 -- event/cpu_locks.sh@22 -- # lslocks -p 1192346 00:07:17.990 15:57:48 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:18.926 lslocks: write error 00:07:18.926 15:57:49 -- event/cpu_locks.sh@107 -- # killprocess 1192077 00:07:18.926 15:57:49 -- common/autotest_common.sh@936 -- # '[' -z 1192077 ']' 00:07:18.926 15:57:49 -- common/autotest_common.sh@940 -- # kill -0 1192077 00:07:18.926 15:57:49 -- common/autotest_common.sh@941 -- # uname 00:07:18.926 15:57:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:18.926 15:57:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1192077 00:07:18.926 15:57:49 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:18.926 15:57:49 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:18.926 15:57:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1192077' 00:07:18.926 killing process with pid 1192077 00:07:18.926 15:57:49 -- common/autotest_common.sh@955 -- # kill 1192077 00:07:18.926 15:57:49 -- common/autotest_common.sh@960 -- # wait 1192077 00:07:19.495 15:57:50 -- event/cpu_locks.sh@108 -- # killprocess 1192346 00:07:19.495 15:57:50 -- common/autotest_common.sh@936 -- # '[' -z 1192346 ']' 00:07:19.495 15:57:50 -- common/autotest_common.sh@940 -- # kill -0 1192346 00:07:19.495 15:57:50 -- common/autotest_common.sh@941 -- # uname 00:07:19.495 15:57:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:19.495 15:57:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1192346 00:07:19.495 15:57:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:19.495 15:57:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:19.495 15:57:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1192346' 00:07:19.495 killing process with pid 1192346 00:07:19.495 15:57:50 -- common/autotest_common.sh@955 -- # kill 1192346 00:07:19.495 15:57:50 -- common/autotest_common.sh@960 -- # wait 1192346 00:07:19.754 00:07:19.754 real 0m3.668s 00:07:19.754 user 0m3.940s 00:07:19.754 sys 0m1.207s 00:07:19.754 15:57:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:19.754 15:57:50 -- common/autotest_common.sh@10 -- # set +x 00:07:19.754 ************************************ 00:07:19.754 END TEST locking_app_on_unlocked_coremask 
00:07:19.754 ************************************ 00:07:19.754 15:57:50 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:19.754 15:57:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:19.754 15:57:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:19.754 15:57:50 -- common/autotest_common.sh@10 -- # set +x 00:07:19.754 ************************************ 00:07:19.754 START TEST locking_app_on_locked_coremask 00:07:19.754 ************************************ 00:07:19.754 15:57:50 -- common/autotest_common.sh@1114 -- # locking_app_on_locked_coremask 00:07:19.754 15:57:50 -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:19.754 15:57:50 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1192870 00:07:19.754 15:57:50 -- event/cpu_locks.sh@116 -- # waitforlisten 1192870 /var/tmp/spdk.sock 00:07:19.754 15:57:50 -- common/autotest_common.sh@829 -- # '[' -z 1192870 ']' 00:07:19.754 15:57:50 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:19.754 15:57:50 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:19.754 15:57:50 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:19.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:19.754 15:57:50 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:19.754 15:57:50 -- common/autotest_common.sh@10 -- # set +x 00:07:19.754 [2024-11-20 15:57:50.508676] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:19.754 [2024-11-20 15:57:50.508731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192870 ] 00:07:19.754 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.013 [2024-11-20 15:57:50.577524] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:20.013 [2024-11-20 15:57:50.614965] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:20.014 [2024-11-20 15:57:50.615094] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.582 15:57:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:20.582 15:57:51 -- common/autotest_common.sh@862 -- # return 0 00:07:20.582 15:57:51 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1192926 00:07:20.582 15:57:51 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1192926 /var/tmp/spdk2.sock 00:07:20.582 15:57:51 -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:20.582 15:57:51 -- common/autotest_common.sh@650 -- # local es=0 00:07:20.582 15:57:51 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1192926 /var/tmp/spdk2.sock 00:07:20.582 15:57:51 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:20.582 15:57:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.582 15:57:51 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:20.582 15:57:51 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:20.582 15:57:51 -- common/autotest_common.sh@653 -- # waitforlisten 1192926 /var/tmp/spdk2.sock 00:07:20.582 15:57:51 -- common/autotest_common.sh@829 -- # '[' 
-z 1192926 ']' 00:07:20.582 15:57:51 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:20.582 15:57:51 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.582 15:57:51 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:20.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:20.582 15:57:51 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.582 15:57:51 -- common/autotest_common.sh@10 -- # set +x 00:07:20.582 [2024-11-20 15:57:51.365768] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:20.582 [2024-11-20 15:57:51.365824] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1192926 ] 00:07:20.841 EAL: No free 2048 kB hugepages reported on node 1 00:07:20.841 [2024-11-20 15:57:51.460205] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1192870 has claimed it. 00:07:20.841 [2024-11-20 15:57:51.460243] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:21.410 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1192926) - No such process 00:07:21.410 ERROR: process (pid: 1192926) is no longer running 00:07:21.410 15:57:51 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.410 15:57:51 -- common/autotest_common.sh@862 -- # return 1 00:07:21.410 15:57:51 -- common/autotest_common.sh@653 -- # es=1 00:07:21.410 15:57:51 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:21.410 15:57:51 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:21.410 15:57:51 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:21.410 15:57:51 -- event/cpu_locks.sh@122 -- # locks_exist 1192870 00:07:21.410 15:57:51 -- event/cpu_locks.sh@22 -- # lslocks -p 1192870 00:07:21.410 15:57:51 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:21.669 lslocks: write error 00:07:21.669 15:57:52 -- event/cpu_locks.sh@124 -- # killprocess 1192870 00:07:21.669 15:57:52 -- common/autotest_common.sh@936 -- # '[' -z 1192870 ']' 00:07:21.669 15:57:52 -- common/autotest_common.sh@940 -- # kill -0 1192870 00:07:21.669 15:57:52 -- common/autotest_common.sh@941 -- # uname 00:07:21.669 15:57:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:21.669 15:57:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1192870 00:07:21.669 15:57:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:21.669 15:57:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:21.669 15:57:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1192870' 00:07:21.669 killing process with pid 1192870 00:07:21.669 15:57:52 -- common/autotest_common.sh@955 -- # kill 1192870 00:07:21.669 15:57:52 -- common/autotest_common.sh@960 -- # wait 1192870 00:07:21.929 00:07:21.929 real 0m2.214s 00:07:21.929 user 0m2.454s 00:07:21.929 sys 0m0.625s 00:07:21.929 15:57:52 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:21.929 15:57:52 -- common/autotest_common.sh@10 -- # set +x 00:07:21.929 ************************************ 00:07:21.929 END TEST locking_app_on_locked_coremask 00:07:21.929 ************************************ 00:07:21.929 15:57:52 -- 
event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:21.929 15:57:52 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:21.929 15:57:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:21.929 15:57:52 -- common/autotest_common.sh@10 -- # set +x 00:07:21.929 ************************************ 00:07:21.929 START TEST locking_overlapped_coremask 00:07:21.929 ************************************ 00:07:21.929 15:57:52 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask 00:07:21.929 15:57:52 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1193227 00:07:21.929 15:57:52 -- event/cpu_locks.sh@133 -- # waitforlisten 1193227 /var/tmp/spdk.sock 00:07:21.929 15:57:52 -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:21.929 15:57:52 -- common/autotest_common.sh@829 -- # '[' -z 1193227 ']' 00:07:21.929 15:57:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.929 15:57:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:21.929 15:57:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:21.929 15:57:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:21.929 15:57:52 -- common/autotest_common.sh@10 -- # set +x 00:07:22.188 [2024-11-20 15:57:52.775854] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:22.188 [2024-11-20 15:57:52.775911] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193227 ] 00:07:22.188 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.188 [2024-11-20 15:57:52.848601] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.188 [2024-11-20 15:57:52.888150] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:22.188 [2024-11-20 15:57:52.888292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.188 [2024-11-20 15:57:52.888314] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.188 [2024-11-20 15:57:52.888317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.126 15:57:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.126 15:57:53 -- common/autotest_common.sh@862 -- # return 0 00:07:23.126 15:57:53 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1193467 00:07:23.126 15:57:53 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1193467 /var/tmp/spdk2.sock 00:07:23.126 15:57:53 -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:23.126 15:57:53 -- common/autotest_common.sh@650 -- # local es=0 00:07:23.126 15:57:53 -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1193467 /var/tmp/spdk2.sock 00:07:23.126 15:57:53 -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:23.126 15:57:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.126 15:57:53 -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:23.126 15:57:53 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:23.126 15:57:53 -- 
common/autotest_common.sh@653 -- # waitforlisten 1193467 /var/tmp/spdk2.sock 00:07:23.126 15:57:53 -- common/autotest_common.sh@829 -- # '[' -z 1193467 ']' 00:07:23.126 15:57:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:23.126 15:57:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.126 15:57:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:23.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:23.126 15:57:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.126 15:57:53 -- common/autotest_common.sh@10 -- # set +x 00:07:23.126 [2024-11-20 15:57:53.639787] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:23.126 [2024-11-20 15:57:53.639839] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193467 ] 00:07:23.126 EAL: No free 2048 kB hugepages reported on node 1 00:07:23.126 [2024-11-20 15:57:53.739146] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1193227 has claimed it. 00:07:23.126 [2024-11-20 15:57:53.739186] app.c: 791:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:23.695 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1193467) - No such process 00:07:23.695 ERROR: process (pid: 1193467) is no longer running 00:07:23.695 15:57:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.695 15:57:54 -- common/autotest_common.sh@862 -- # return 1 00:07:23.695 15:57:54 -- common/autotest_common.sh@653 -- # es=1 00:07:23.695 15:57:54 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:23.695 15:57:54 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:23.695 15:57:54 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:23.695 15:57:54 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:23.695 15:57:54 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:23.695 15:57:54 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:23.695 15:57:54 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:23.695 15:57:54 -- event/cpu_locks.sh@141 -- # killprocess 1193227 00:07:23.695 15:57:54 -- common/autotest_common.sh@936 -- # '[' -z 1193227 ']' 00:07:23.695 15:57:54 -- common/autotest_common.sh@940 -- # kill -0 1193227 00:07:23.695 15:57:54 -- common/autotest_common.sh@941 -- # uname 00:07:23.695 15:57:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:23.695 15:57:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1193227 00:07:23.695 15:57:54 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:23.695 15:57:54 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:23.695 15:57:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1193227' 00:07:23.695 killing process with pid 1193227 00:07:23.695 15:57:54 -- common/autotest_common.sh@955 -- # kill 1193227 00:07:23.695 15:57:54 -- 
common/autotest_common.sh@960 -- # wait 1193227 00:07:23.954 00:07:23.954 real 0m1.912s 00:07:23.954 user 0m5.472s 00:07:23.955 sys 0m0.456s 00:07:23.955 15:57:54 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:23.955 15:57:54 -- common/autotest_common.sh@10 -- # set +x 00:07:23.955 ************************************ 00:07:23.955 END TEST locking_overlapped_coremask 00:07:23.955 ************************************ 00:07:23.955 15:57:54 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:23.955 15:57:54 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:23.955 15:57:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:23.955 15:57:54 -- common/autotest_common.sh@10 -- # set +x 00:07:23.955 ************************************ 00:07:23.955 START TEST locking_overlapped_coremask_via_rpc 00:07:23.955 ************************************ 00:07:23.955 15:57:54 -- common/autotest_common.sh@1114 -- # locking_overlapped_coremask_via_rpc 00:07:23.955 15:57:54 -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:23.955 15:57:54 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1193551 00:07:23.955 15:57:54 -- event/cpu_locks.sh@149 -- # waitforlisten 1193551 /var/tmp/spdk.sock 00:07:23.955 15:57:54 -- common/autotest_common.sh@829 -- # '[' -z 1193551 ']' 00:07:23.955 15:57:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:23.955 15:57:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:23.955 15:57:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:23.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:23.955 15:57:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:23.955 15:57:54 -- common/autotest_common.sh@10 -- # set +x 00:07:23.955 [2024-11-20 15:57:54.720050] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:23.955 [2024-11-20 15:57:54.720103] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193551 ] 00:07:23.955 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.214 [2024-11-20 15:57:54.795151] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:24.214 [2024-11-20 15:57:54.795185] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:24.214 [2024-11-20 15:57:54.847247] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:24.214 [2024-11-20 15:57:54.847434] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:07:24.214 [2024-11-20 15:57:54.847550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:24.214 [2024-11-20 15:57:54.847556] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.152 15:57:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:25.152 15:57:55 -- common/autotest_common.sh@862 -- # return 0 00:07:25.152 15:57:55 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1193810 00:07:25.152 15:57:55 -- event/cpu_locks.sh@153 -- # waitforlisten 1193810 /var/tmp/spdk2.sock 00:07:25.152 15:57:55 -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:25.152 15:57:55 -- common/autotest_common.sh@829 -- # '[' -z 1193810 ']' 00:07:25.152 15:57:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:25.152 15:57:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.152 15:57:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:25.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:25.152 15:57:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.152 15:57:55 -- common/autotest_common.sh@10 -- # set +x 00:07:25.152 [2024-11-20 15:57:55.713578] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:25.152 [2024-11-20 15:57:55.713628] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1193810 ] 00:07:25.152 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.152 [2024-11-20 15:57:55.810894] app.c: 795:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
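Both overlapped-coremask cases use the same pair of masks: the primary target runs with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so the only contested core is core 2. In the plain locking_overlapped_coremask test the second target fails immediately at startup; in this via_rpc variant both targets start with --disable-cpumask-locks, so the conflict only surfaces once framework_enable_cpumask_locks is issued below. The overlap itself is simple mask arithmetic:

    # Illustrative check of the shared core between the two masks used above
    printf 'overlap=0x%x\n' $(( 0x07 & 0x1c ))    # overlap=0x4, i.e. only core 2 is shared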
00:07:25.152 [2024-11-20 15:57:55.810919] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.152 [2024-11-20 15:57:55.885101] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:25.152 [2024-11-20 15:57:55.885278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:07:25.152 [2024-11-20 15:57:55.888563] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.152 [2024-11-20 15:57:55.888565] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:07:25.721 15:57:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:25.721 15:57:56 -- common/autotest_common.sh@862 -- # return 0 00:07:25.721 15:57:56 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:25.721 15:57:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.721 15:57:56 -- common/autotest_common.sh@10 -- # set +x 00:07:25.980 15:57:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:25.980 15:57:56 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:25.980 15:57:56 -- common/autotest_common.sh@650 -- # local es=0 00:07:25.980 15:57:56 -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:25.980 15:57:56 -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:25.980 15:57:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.980 15:57:56 -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:25.980 15:57:56 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.980 15:57:56 -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:25.980 15:57:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:25.980 15:57:56 -- common/autotest_common.sh@10 -- # set +x 00:07:25.980 [2024-11-20 15:57:56.538584] app.c: 665:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1193551 has claimed it. 00:07:25.980 request: 00:07:25.980 { 00:07:25.980 "method": "framework_enable_cpumask_locks", 00:07:25.980 "req_id": 1 00:07:25.980 } 00:07:25.980 Got JSON-RPC error response 00:07:25.980 response: 00:07:25.980 { 00:07:25.980 "code": -32603, 00:07:25.980 "message": "Failed to claim CPU core: 2" 00:07:25.980 } 00:07:25.980 15:57:56 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:25.980 15:57:56 -- common/autotest_common.sh@653 -- # es=1 00:07:25.980 15:57:56 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.980 15:57:56 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:25.980 15:57:56 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.980 15:57:56 -- event/cpu_locks.sh@158 -- # waitforlisten 1193551 /var/tmp/spdk.sock 00:07:25.980 15:57:56 -- common/autotest_common.sh@829 -- # '[' -z 1193551 ']' 00:07:25.980 15:57:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:25.980 15:57:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.980 15:57:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:25.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
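The claim failure above is reported over JSON-RPC: enabling the locks on the first target succeeds, while the same call against the second target's socket fails with error -32603 ("Failed to claim CPU core: 2") because core 2 is already locked. The rpc_cmd wrapper used by the test presumably drives scripts/rpc.py, so roughly the same exchange can be reproduced as:

    # Approximate equivalent of the two rpc_cmd calls traced above
    scripts/rpc.py framework_enable_cpumask_locks                          # first target: takes its core locks
    scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks   # fails: "Failed to claim CPU core: 2"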
00:07:25.980 15:57:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.980 15:57:56 -- common/autotest_common.sh@10 -- # set +x 00:07:25.980 15:57:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:25.980 15:57:56 -- common/autotest_common.sh@862 -- # return 0 00:07:25.980 15:57:56 -- event/cpu_locks.sh@159 -- # waitforlisten 1193810 /var/tmp/spdk2.sock 00:07:25.980 15:57:56 -- common/autotest_common.sh@829 -- # '[' -z 1193810 ']' 00:07:25.980 15:57:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:25.980 15:57:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:25.980 15:57:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:25.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:25.980 15:57:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:25.980 15:57:56 -- common/autotest_common.sh@10 -- # set +x 00:07:26.239 15:57:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.239 15:57:56 -- common/autotest_common.sh@862 -- # return 0 00:07:26.239 15:57:56 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:26.239 15:57:56 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:26.239 15:57:56 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:26.239 15:57:56 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:26.239 00:07:26.239 real 0m2.229s 00:07:26.239 user 0m0.986s 00:07:26.239 sys 0m0.181s 00:07:26.239 15:57:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:26.239 15:57:56 -- common/autotest_common.sh@10 -- # set +x 00:07:26.239 ************************************ 00:07:26.239 END TEST locking_overlapped_coremask_via_rpc 00:07:26.239 ************************************ 00:07:26.239 15:57:56 -- event/cpu_locks.sh@174 -- # cleanup 00:07:26.239 15:57:56 -- event/cpu_locks.sh@15 -- # [[ -z 1193551 ]] 00:07:26.239 15:57:56 -- event/cpu_locks.sh@15 -- # killprocess 1193551 00:07:26.239 15:57:56 -- common/autotest_common.sh@936 -- # '[' -z 1193551 ']' 00:07:26.239 15:57:56 -- common/autotest_common.sh@940 -- # kill -0 1193551 00:07:26.239 15:57:56 -- common/autotest_common.sh@941 -- # uname 00:07:26.239 15:57:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:26.239 15:57:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1193551 00:07:26.239 15:57:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:26.239 15:57:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:26.239 15:57:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1193551' 00:07:26.239 killing process with pid 1193551 00:07:26.239 15:57:57 -- common/autotest_common.sh@955 -- # kill 1193551 00:07:26.239 15:57:57 -- common/autotest_common.sh@960 -- # wait 1193551 00:07:26.808 15:57:57 -- event/cpu_locks.sh@16 -- # [[ -z 1193810 ]] 00:07:26.808 15:57:57 -- event/cpu_locks.sh@16 -- # killprocess 1193810 00:07:26.808 15:57:57 -- common/autotest_common.sh@936 -- # '[' -z 1193810 ']' 00:07:26.808 15:57:57 -- common/autotest_common.sh@940 -- # kill -0 1193810 00:07:26.808 15:57:57 -- common/autotest_common.sh@941 -- # uname 
00:07:26.808 15:57:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:26.808 15:57:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1193810 00:07:26.808 15:57:57 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:07:26.808 15:57:57 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:07:26.808 15:57:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1193810' 00:07:26.808 killing process with pid 1193810 00:07:26.808 15:57:57 -- common/autotest_common.sh@955 -- # kill 1193810 00:07:26.808 15:57:57 -- common/autotest_common.sh@960 -- # wait 1193810 00:07:27.067 15:57:57 -- event/cpu_locks.sh@18 -- # rm -f 00:07:27.067 15:57:57 -- event/cpu_locks.sh@1 -- # cleanup 00:07:27.067 15:57:57 -- event/cpu_locks.sh@15 -- # [[ -z 1193551 ]] 00:07:27.067 15:57:57 -- event/cpu_locks.sh@15 -- # killprocess 1193551 00:07:27.067 15:57:57 -- common/autotest_common.sh@936 -- # '[' -z 1193551 ']' 00:07:27.067 15:57:57 -- common/autotest_common.sh@940 -- # kill -0 1193551 00:07:27.067 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1193551) - No such process 00:07:27.067 15:57:57 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1193551 is not found' 00:07:27.067 Process with pid 1193551 is not found 00:07:27.067 15:57:57 -- event/cpu_locks.sh@16 -- # [[ -z 1193810 ]] 00:07:27.067 15:57:57 -- event/cpu_locks.sh@16 -- # killprocess 1193810 00:07:27.067 15:57:57 -- common/autotest_common.sh@936 -- # '[' -z 1193810 ']' 00:07:27.067 15:57:57 -- common/autotest_common.sh@940 -- # kill -0 1193810 00:07:27.067 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh: line 940: kill: (1193810) - No such process 00:07:27.067 15:57:57 -- common/autotest_common.sh@963 -- # echo 'Process with pid 1193810 is not found' 00:07:27.067 Process with pid 1193810 is not found 00:07:27.067 15:57:57 -- event/cpu_locks.sh@18 -- # rm -f 00:07:27.067 00:07:27.067 real 0m18.321s 00:07:27.067 user 0m31.498s 00:07:27.067 sys 0m5.879s 00:07:27.067 15:57:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.067 15:57:57 -- common/autotest_common.sh@10 -- # set +x 00:07:27.067 ************************************ 00:07:27.067 END TEST cpu_locks 00:07:27.067 ************************************ 00:07:27.067 00:07:27.067 real 0m43.984s 00:07:27.067 user 1m23.888s 00:07:27.067 sys 0m9.886s 00:07:27.067 15:57:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:27.067 15:57:57 -- common/autotest_common.sh@10 -- # set +x 00:07:27.067 ************************************ 00:07:27.067 END TEST event 00:07:27.067 ************************************ 00:07:27.067 15:57:57 -- spdk/autotest.sh@175 -- # run_test thread /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:27.067 15:57:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:27.067 15:57:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.067 15:57:57 -- common/autotest_common.sh@10 -- # set +x 00:07:27.067 ************************************ 00:07:27.067 START TEST thread 00:07:27.067 ************************************ 00:07:27.067 15:57:57 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/thread.sh 00:07:27.328 * Looking for test storage... 
00:07:27.328 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread 00:07:27.328 15:57:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:27.328 15:57:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:27.328 15:57:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:27.328 15:57:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:27.328 15:57:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:27.328 15:57:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:27.328 15:57:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:27.328 15:57:57 -- scripts/common.sh@335 -- # IFS=.-: 00:07:27.328 15:57:57 -- scripts/common.sh@335 -- # read -ra ver1 00:07:27.328 15:57:57 -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.328 15:57:57 -- scripts/common.sh@336 -- # read -ra ver2 00:07:27.328 15:57:57 -- scripts/common.sh@337 -- # local 'op=<' 00:07:27.328 15:57:57 -- scripts/common.sh@339 -- # ver1_l=2 00:07:27.328 15:57:57 -- scripts/common.sh@340 -- # ver2_l=1 00:07:27.328 15:57:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:27.328 15:57:57 -- scripts/common.sh@343 -- # case "$op" in 00:07:27.328 15:57:57 -- scripts/common.sh@344 -- # : 1 00:07:27.328 15:57:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:27.328 15:57:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:27.328 15:57:57 -- scripts/common.sh@364 -- # decimal 1 00:07:27.328 15:57:57 -- scripts/common.sh@352 -- # local d=1 00:07:27.328 15:57:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.328 15:57:57 -- scripts/common.sh@354 -- # echo 1 00:07:27.328 15:57:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:27.328 15:57:57 -- scripts/common.sh@365 -- # decimal 2 00:07:27.328 15:57:57 -- scripts/common.sh@352 -- # local d=2 00:07:27.328 15:57:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.328 15:57:57 -- scripts/common.sh@354 -- # echo 2 00:07:27.328 15:57:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:27.328 15:57:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:27.328 15:57:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:27.328 15:57:57 -- scripts/common.sh@367 -- # return 0 00:07:27.328 15:57:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.328 15:57:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:27.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.328 --rc genhtml_branch_coverage=1 00:07:27.328 --rc genhtml_function_coverage=1 00:07:27.328 --rc genhtml_legend=1 00:07:27.328 --rc geninfo_all_blocks=1 00:07:27.328 --rc geninfo_unexecuted_blocks=1 00:07:27.328 00:07:27.328 ' 00:07:27.328 15:57:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:27.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.328 --rc genhtml_branch_coverage=1 00:07:27.328 --rc genhtml_function_coverage=1 00:07:27.328 --rc genhtml_legend=1 00:07:27.328 --rc geninfo_all_blocks=1 00:07:27.328 --rc geninfo_unexecuted_blocks=1 00:07:27.328 00:07:27.328 ' 00:07:27.328 15:57:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:27.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.328 --rc genhtml_branch_coverage=1 00:07:27.328 --rc genhtml_function_coverage=1 00:07:27.328 --rc genhtml_legend=1 00:07:27.328 --rc geninfo_all_blocks=1 00:07:27.328 --rc geninfo_unexecuted_blocks=1 00:07:27.328 00:07:27.328 ' 
00:07:27.328 15:57:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:27.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.328 --rc genhtml_branch_coverage=1 00:07:27.328 --rc genhtml_function_coverage=1 00:07:27.328 --rc genhtml_legend=1 00:07:27.328 --rc geninfo_all_blocks=1 00:07:27.328 --rc geninfo_unexecuted_blocks=1 00:07:27.328 00:07:27.328 ' 00:07:27.328 15:57:57 -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:27.328 15:57:57 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:27.328 15:57:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:27.328 15:57:57 -- common/autotest_common.sh@10 -- # set +x 00:07:27.329 ************************************ 00:07:27.329 START TEST thread_poller_perf 00:07:27.329 ************************************ 00:07:27.329 15:57:57 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:27.329 [2024-11-20 15:57:57.997946] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:27.329 [2024-11-20 15:57:57.998038] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194298 ] 00:07:27.329 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.329 [2024-11-20 15:57:58.070942] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.329 [2024-11-20 15:57:58.107410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.329 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:28.707 [2024-11-20T14:57:59.512Z] ====================================== 00:07:28.707 [2024-11-20T14:57:59.512Z] busy:2506281426 (cyc) 00:07:28.707 [2024-11-20T14:57:59.512Z] total_run_count: 410000 00:07:28.707 [2024-11-20T14:57:59.512Z] tsc_hz: 2500000000 (cyc) 00:07:28.707 [2024-11-20T14:57:59.512Z] ====================================== 00:07:28.707 [2024-11-20T14:57:59.512Z] poller_cost: 6112 (cyc), 2444 (nsec) 00:07:28.707 00:07:28.707 real 0m1.194s 00:07:28.707 user 0m1.099s 00:07:28.707 sys 0m0.091s 00:07:28.707 15:57:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:28.707 15:57:59 -- common/autotest_common.sh@10 -- # set +x 00:07:28.707 ************************************ 00:07:28.707 END TEST thread_poller_perf 00:07:28.707 ************************************ 00:07:28.707 15:57:59 -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:28.707 15:57:59 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:07:28.707 15:57:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:28.707 15:57:59 -- common/autotest_common.sh@10 -- # set +x 00:07:28.707 ************************************ 00:07:28.707 START TEST thread_poller_perf 00:07:28.707 ************************************ 00:07:28.707 15:57:59 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:28.707 [2024-11-20 15:57:59.241472] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
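The poller_cost figures printed by poller_perf above follow directly from the run counters: busy TSC cycles divided by total_run_count gives the cycles spent per poller invocation, and dividing by the reported tsc_hz converts that to nanoseconds. For the 1-microsecond-period run above:

    # Reproducing the reported poller_cost from the logged counters
    echo $(( 2506281426 / 410000 ))              # 6112 cyc per poller call
    echo $(( 6112 * 1000000000 / 2500000000 ))   # 2444 nsec at the 2.5 GHz TSC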
00:07:28.707 [2024-11-20 15:57:59.241573] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194474 ] 00:07:28.707 EAL: No free 2048 kB hugepages reported on node 1 00:07:28.707 [2024-11-20 15:57:59.314092] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.707 [2024-11-20 15:57:59.350691] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.707 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:29.642 [2024-11-20T14:58:00.447Z] ====================================== 00:07:29.642 [2024-11-20T14:58:00.447Z] busy:2502220768 (cyc) 00:07:29.642 [2024-11-20T14:58:00.447Z] total_run_count: 5502000 00:07:29.642 [2024-11-20T14:58:00.447Z] tsc_hz: 2500000000 (cyc) 00:07:29.642 [2024-11-20T14:58:00.447Z] ====================================== 00:07:29.642 [2024-11-20T14:58:00.447Z] poller_cost: 454 (cyc), 181 (nsec) 00:07:29.642 00:07:29.642 real 0m1.192s 00:07:29.642 user 0m1.095s 00:07:29.642 sys 0m0.092s 00:07:29.642 15:58:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.642 15:58:00 -- common/autotest_common.sh@10 -- # set +x 00:07:29.642 ************************************ 00:07:29.642 END TEST thread_poller_perf 00:07:29.642 ************************************ 00:07:29.910 15:58:00 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:29.910 00:07:29.910 real 0m2.664s 00:07:29.910 user 0m2.327s 00:07:29.910 sys 0m0.360s 00:07:29.910 15:58:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:29.910 15:58:00 -- common/autotest_common.sh@10 -- # set +x 00:07:29.910 ************************************ 00:07:29.910 END TEST thread 00:07:29.910 ************************************ 00:07:29.910 15:58:00 -- spdk/autotest.sh@176 -- # run_test accel /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:29.910 15:58:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:07:29.910 15:58:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:29.910 15:58:00 -- common/autotest_common.sh@10 -- # set +x 00:07:29.910 ************************************ 00:07:29.910 START TEST accel 00:07:29.910 ************************************ 00:07:29.910 15:58:00 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel.sh 00:07:29.910 * Looking for test storage... 
00:07:29.910 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:07:29.910 15:58:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:07:29.910 15:58:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:07:29.910 15:58:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:07:29.910 15:58:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:07:29.910 15:58:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:07:29.910 15:58:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:07:29.910 15:58:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:07:29.910 15:58:00 -- scripts/common.sh@335 -- # IFS=.-: 00:07:29.910 15:58:00 -- scripts/common.sh@335 -- # read -ra ver1 00:07:29.910 15:58:00 -- scripts/common.sh@336 -- # IFS=.-: 00:07:29.910 15:58:00 -- scripts/common.sh@336 -- # read -ra ver2 00:07:29.910 15:58:00 -- scripts/common.sh@337 -- # local 'op=<' 00:07:29.910 15:58:00 -- scripts/common.sh@339 -- # ver1_l=2 00:07:29.910 15:58:00 -- scripts/common.sh@340 -- # ver2_l=1 00:07:29.910 15:58:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:07:29.910 15:58:00 -- scripts/common.sh@343 -- # case "$op" in 00:07:29.910 15:58:00 -- scripts/common.sh@344 -- # : 1 00:07:29.910 15:58:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:07:29.910 15:58:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:29.910 15:58:00 -- scripts/common.sh@364 -- # decimal 1 00:07:29.910 15:58:00 -- scripts/common.sh@352 -- # local d=1 00:07:29.910 15:58:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:29.910 15:58:00 -- scripts/common.sh@354 -- # echo 1 00:07:29.910 15:58:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:07:29.910 15:58:00 -- scripts/common.sh@365 -- # decimal 2 00:07:29.910 15:58:00 -- scripts/common.sh@352 -- # local d=2 00:07:29.910 15:58:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:29.910 15:58:00 -- scripts/common.sh@354 -- # echo 2 00:07:29.910 15:58:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:07:29.910 15:58:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:07:29.910 15:58:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:07:29.910 15:58:00 -- scripts/common.sh@367 -- # return 0 00:07:29.910 15:58:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:29.910 15:58:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:07:29.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.910 --rc genhtml_branch_coverage=1 00:07:29.910 --rc genhtml_function_coverage=1 00:07:29.910 --rc genhtml_legend=1 00:07:29.910 --rc geninfo_all_blocks=1 00:07:29.910 --rc geninfo_unexecuted_blocks=1 00:07:29.910 00:07:29.910 ' 00:07:29.910 15:58:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:07:29.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.910 --rc genhtml_branch_coverage=1 00:07:29.910 --rc genhtml_function_coverage=1 00:07:29.910 --rc genhtml_legend=1 00:07:29.910 --rc geninfo_all_blocks=1 00:07:29.910 --rc geninfo_unexecuted_blocks=1 00:07:29.910 00:07:29.910 ' 00:07:29.910 15:58:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:07:29.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.910 --rc genhtml_branch_coverage=1 00:07:29.910 --rc genhtml_function_coverage=1 00:07:29.910 --rc genhtml_legend=1 00:07:29.910 --rc geninfo_all_blocks=1 00:07:29.910 --rc geninfo_unexecuted_blocks=1 00:07:29.910 00:07:29.910 ' 
00:07:29.910 15:58:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:07:29.910 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:29.910 --rc genhtml_branch_coverage=1 00:07:29.910 --rc genhtml_function_coverage=1 00:07:29.910 --rc genhtml_legend=1 00:07:29.910 --rc geninfo_all_blocks=1 00:07:29.910 --rc geninfo_unexecuted_blocks=1 00:07:29.910 00:07:29.910 ' 00:07:29.910 15:58:00 -- accel/accel.sh@73 -- # declare -A expected_opcs 00:07:29.910 15:58:00 -- accel/accel.sh@74 -- # get_expected_opcs 00:07:29.910 15:58:00 -- accel/accel.sh@57 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:29.910 15:58:00 -- accel/accel.sh@59 -- # spdk_tgt_pid=1194805 00:07:29.910 15:58:00 -- accel/accel.sh@60 -- # waitforlisten 1194805 00:07:29.910 15:58:00 -- common/autotest_common.sh@829 -- # '[' -z 1194805 ']' 00:07:29.910 15:58:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.910 15:58:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:29.910 15:58:00 -- accel/accel.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:07:29.910 15:58:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:29.911 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.911 15:58:00 -- accel/accel.sh@58 -- # build_accel_config 00:07:29.911 15:58:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:29.911 15:58:00 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:29.911 15:58:00 -- common/autotest_common.sh@10 -- # set +x 00:07:29.911 15:58:00 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:29.911 15:58:00 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:29.911 15:58:00 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:29.911 15:58:00 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:29.911 15:58:00 -- accel/accel.sh@41 -- # local IFS=, 00:07:29.911 15:58:00 -- accel/accel.sh@42 -- # jq -r . 00:07:30.169 [2024-11-20 15:58:00.736553] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:30.169 [2024-11-20 15:58:00.736611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1194805 ] 00:07:30.169 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.169 [2024-11-20 15:58:00.806008] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.169 [2024-11-20 15:58:00.841768] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:07:30.169 [2024-11-20 15:58:00.841913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.738 15:58:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:30.738 15:58:01 -- common/autotest_common.sh@862 -- # return 0 00:07:30.738 15:58:01 -- accel/accel.sh@62 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:07:30.738 15:58:01 -- accel/accel.sh@62 -- # rpc_cmd accel_get_opc_assignments 00:07:30.738 15:58:01 -- accel/accel.sh@62 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:07:30.738 15:58:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:30.738 15:58:01 -- common/autotest_common.sh@10 -- # set +x 00:07:30.998 15:58:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:30.998 15:58:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # IFS== 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # read -r opc module 00:07:30.998 15:58:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:30.998 15:58:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # IFS== 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # read -r opc module 00:07:30.998 15:58:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:30.998 15:58:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # IFS== 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # read -r opc module 00:07:30.998 15:58:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:30.998 15:58:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # IFS== 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # read -r opc module 00:07:30.998 15:58:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:30.998 15:58:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # IFS== 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # read -r opc module 00:07:30.998 15:58:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:30.998 15:58:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # IFS== 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # read -r opc module 00:07:30.998 15:58:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:30.998 15:58:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # IFS== 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # read -r opc module 00:07:30.998 15:58:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:30.998 15:58:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # IFS== 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # read -r opc module 00:07:30.998 15:58:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:30.998 15:58:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # IFS== 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # read -r opc module 00:07:30.998 15:58:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:30.998 15:58:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # IFS== 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # read -r opc module 00:07:30.998 15:58:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:30.998 15:58:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # IFS== 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # read -r opc module 00:07:30.998 15:58:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:30.998 15:58:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # IFS== 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # read -r opc module 00:07:30.998 
15:58:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:30.998 15:58:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # IFS== 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # read -r opc module 00:07:30.998 15:58:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:30.998 15:58:01 -- accel/accel.sh@63 -- # for opc_opt in "${exp_opcs[@]}" 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # IFS== 00:07:30.998 15:58:01 -- accel/accel.sh@64 -- # read -r opc module 00:07:30.998 15:58:01 -- accel/accel.sh@65 -- # expected_opcs["$opc"]=software 00:07:30.998 15:58:01 -- accel/accel.sh@67 -- # killprocess 1194805 00:07:30.998 15:58:01 -- common/autotest_common.sh@936 -- # '[' -z 1194805 ']' 00:07:30.998 15:58:01 -- common/autotest_common.sh@940 -- # kill -0 1194805 00:07:30.998 15:58:01 -- common/autotest_common.sh@941 -- # uname 00:07:30.998 15:58:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:07:30.998 15:58:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1194805 00:07:30.998 15:58:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:07:30.998 15:58:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:07:30.998 15:58:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1194805' 00:07:30.998 killing process with pid 1194805 00:07:30.998 15:58:01 -- common/autotest_common.sh@955 -- # kill 1194805 00:07:30.998 15:58:01 -- common/autotest_common.sh@960 -- # wait 1194805 00:07:31.258 15:58:01 -- accel/accel.sh@68 -- # trap - ERR 00:07:31.258 15:58:01 -- accel/accel.sh@81 -- # run_test accel_help accel_perf -h 00:07:31.258 15:58:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:07:31.258 15:58:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.258 15:58:01 -- common/autotest_common.sh@10 -- # set +x 00:07:31.258 15:58:01 -- common/autotest_common.sh@1114 -- # accel_perf -h 00:07:31.258 15:58:01 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:07:31.258 15:58:01 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.258 15:58:01 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.258 15:58:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.258 15:58:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.258 15:58:01 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.258 15:58:01 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.258 15:58:01 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.258 15:58:01 -- accel/accel.sh@42 -- # jq -r . 
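The xtrace above is the harness walking every advertised opcode and recording which module will service it; with no hardware accel engine configured in this job, every entry resolves to the software module. A rough, readable reconstruction of that traced loop follows (the exp_opcs values and the here-string feeding read are illustrative assumptions, not text copied from accel.sh):

  declare -A expected_opcs
  exp_opcs=("copy=software" "fill=software" "crc32c=software")   # illustrative pairs; in the log the list appears to be built from the JSON dump run through the jq filter shown above
  for opc_opt in "${exp_opcs[@]}"; do
    IFS="=" read -r opc module <<< "$opc_opt"    # split each "opcode=module" pair
    expected_opcs["$opc"]=$module                # resolves to "software" for every opcode in this run
  done

With the mapping recorded, the RPC-driven accel_perf instance (pid 1194805) is killed and the suite moves on to the command-line checks.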
00:07:31.258 15:58:01 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.258 15:58:01 -- common/autotest_common.sh@10 -- # set +x 00:07:31.258 15:58:02 -- accel/accel.sh@83 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:07:31.258 15:58:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:31.258 15:58:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.258 15:58:02 -- common/autotest_common.sh@10 -- # set +x 00:07:31.258 ************************************ 00:07:31.258 START TEST accel_missing_filename 00:07:31.258 ************************************ 00:07:31.258 15:58:02 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress 00:07:31.258 15:58:02 -- common/autotest_common.sh@650 -- # local es=0 00:07:31.258 15:58:02 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress 00:07:31.258 15:58:02 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:31.258 15:58:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.258 15:58:02 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:31.258 15:58:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.258 15:58:02 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress 00:07:31.258 15:58:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:07:31.258 15:58:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.258 15:58:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.258 15:58:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.258 15:58:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.258 15:58:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.258 15:58:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.258 15:58:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.258 15:58:02 -- accel/accel.sh@42 -- # jq -r . 00:07:31.258 [2024-11-20 15:58:02.049617] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:31.258 [2024-11-20 15:58:02.049690] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195109 ] 00:07:31.518 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.518 [2024-11-20 15:58:02.122624] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.518 [2024-11-20 15:58:02.158993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.518 [2024-11-20 15:58:02.199777] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:31.518 [2024-11-20 15:58:02.259638] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:31.518 A filename is required. 
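That error is the point of the test: a compress workload started without an input file must abort, and the NOT wrapper turns the non-zero exit into a pass. A minimal sketch of the distinction, using only flags from the traced command (paths are relative to the SPDK tree this job checked out; the harness also passes its JSON accel config on /dev/fd/62 via -c, omitted here):

  # expected to fail: no -l input, so accel_perf aborts with "A filename is required."
  ./build/examples/accel_perf -t 1 -w compress
  # -l names the uncompressed input and satisfies that check; the bib file is the one
  # the accel_compress_verify test below feeds in (that test then fails for a different
  # reason, since -y verification is not supported for compress)
  ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib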
00:07:31.518 15:58:02 -- common/autotest_common.sh@653 -- # es=234 00:07:31.518 15:58:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:31.518 15:58:02 -- common/autotest_common.sh@662 -- # es=106 00:07:31.518 15:58:02 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:31.518 15:58:02 -- common/autotest_common.sh@670 -- # es=1 00:07:31.518 15:58:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:31.518 00:07:31.518 real 0m0.301s 00:07:31.518 user 0m0.202s 00:07:31.518 sys 0m0.135s 00:07:31.518 15:58:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:31.518 15:58:02 -- common/autotest_common.sh@10 -- # set +x 00:07:31.518 ************************************ 00:07:31.518 END TEST accel_missing_filename 00:07:31.518 ************************************ 00:07:31.777 15:58:02 -- accel/accel.sh@85 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:31.777 15:58:02 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:31.777 15:58:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:31.777 15:58:02 -- common/autotest_common.sh@10 -- # set +x 00:07:31.777 ************************************ 00:07:31.777 START TEST accel_compress_verify 00:07:31.777 ************************************ 00:07:31.777 15:58:02 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:31.777 15:58:02 -- common/autotest_common.sh@650 -- # local es=0 00:07:31.777 15:58:02 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:31.777 15:58:02 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:31.777 15:58:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.777 15:58:02 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:31.777 15:58:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:31.777 15:58:02 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:31.777 15:58:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:07:31.777 15:58:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:31.777 15:58:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:31.777 15:58:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:31.777 15:58:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:31.777 15:58:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:31.777 15:58:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:31.777 15:58:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:31.777 15:58:02 -- accel/accel.sh@42 -- # jq -r . 00:07:31.777 [2024-11-20 15:58:02.394713] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:31.777 [2024-11-20 15:58:02.394786] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195138 ] 00:07:31.777 EAL: No free 2048 kB hugepages reported on node 1 00:07:31.777 [2024-11-20 15:58:02.467820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.777 [2024-11-20 15:58:02.504727] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.777 [2024-11-20 15:58:02.545782] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:32.037 [2024-11-20 15:58:02.606189] accel_perf.c:1385:main: *ERROR*: ERROR starting application 00:07:32.037 00:07:32.037 Compression does not support the verify option, aborting. 00:07:32.037 15:58:02 -- common/autotest_common.sh@653 -- # es=161 00:07:32.037 15:58:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.037 15:58:02 -- common/autotest_common.sh@662 -- # es=33 00:07:32.037 15:58:02 -- common/autotest_common.sh@663 -- # case "$es" in 00:07:32.037 15:58:02 -- common/autotest_common.sh@670 -- # es=1 00:07:32.037 15:58:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.037 00:07:32.037 real 0m0.302s 00:07:32.037 user 0m0.202s 00:07:32.037 sys 0m0.137s 00:07:32.037 15:58:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.037 15:58:02 -- common/autotest_common.sh@10 -- # set +x 00:07:32.037 ************************************ 00:07:32.037 END TEST accel_compress_verify 00:07:32.037 ************************************ 00:07:32.037 15:58:02 -- accel/accel.sh@87 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:07:32.037 15:58:02 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:32.037 15:58:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.037 15:58:02 -- common/autotest_common.sh@10 -- # set +x 00:07:32.037 ************************************ 00:07:32.037 START TEST accel_wrong_workload 00:07:32.037 ************************************ 00:07:32.037 15:58:02 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w foobar 00:07:32.037 15:58:02 -- common/autotest_common.sh@650 -- # local es=0 00:07:32.037 15:58:02 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:07:32.037 15:58:02 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:32.037 15:58:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.037 15:58:02 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:32.037 15:58:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.037 15:58:02 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w foobar 00:07:32.037 15:58:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:07:32.037 15:58:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.037 15:58:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.037 15:58:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.037 15:58:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.037 15:58:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.037 15:58:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.037 15:58:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.037 15:58:02 -- accel/accel.sh@42 -- # jq -r . 
00:07:32.037 Unsupported workload type: foobar 00:07:32.037 [2024-11-20 15:58:02.747419] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:07:32.037 accel_perf options: 00:07:32.037 [-h help message] 00:07:32.037 [-q queue depth per core] 00:07:32.037 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:32.037 [-T number of threads per core 00:07:32.037 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:32.037 [-t time in seconds] 00:07:32.037 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:32.037 [ dif_verify, , dif_generate, dif_generate_copy 00:07:32.037 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:32.037 [-l for compress/decompress workloads, name of uncompressed input file 00:07:32.037 [-S for crc32c workload, use this seed value (default 0) 00:07:32.037 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:32.037 [-f for fill workload, use this BYTE value (default 255) 00:07:32.037 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:32.037 [-y verify result if this switch is on] 00:07:32.037 [-a tasks to allocate per core (default: same value as -q)] 00:07:32.037 Can be used to spread operations across a wider range of memory. 00:07:32.037 15:58:02 -- common/autotest_common.sh@653 -- # es=1 00:07:32.037 15:58:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.037 15:58:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:32.037 15:58:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.037 00:07:32.037 real 0m0.037s 00:07:32.037 user 0m0.025s 00:07:32.037 sys 0m0.012s 00:07:32.037 15:58:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.037 15:58:02 -- common/autotest_common.sh@10 -- # set +x 00:07:32.037 ************************************ 00:07:32.037 END TEST accel_wrong_workload 00:07:32.037 ************************************ 00:07:32.037 Error: writing output failed: Broken pipe 00:07:32.037 15:58:02 -- accel/accel.sh@89 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:07:32.037 15:58:02 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:07:32.037 15:58:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.037 15:58:02 -- common/autotest_common.sh@10 -- # set +x 00:07:32.037 ************************************ 00:07:32.037 START TEST accel_negative_buffers 00:07:32.037 ************************************ 00:07:32.037 15:58:02 -- common/autotest_common.sh@1114 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:07:32.037 15:58:02 -- common/autotest_common.sh@650 -- # local es=0 00:07:32.037 15:58:02 -- common/autotest_common.sh@652 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:07:32.037 15:58:02 -- common/autotest_common.sh@638 -- # local arg=accel_perf 00:07:32.037 15:58:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.037 15:58:02 -- common/autotest_common.sh@642 -- # type -t accel_perf 00:07:32.037 15:58:02 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:32.037 15:58:02 -- common/autotest_common.sh@653 -- # accel_perf -t 1 -w xor -y -x -1 00:07:32.037 15:58:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 
-x -1 00:07:32.037 15:58:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.037 15:58:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.037 15:58:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.037 15:58:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.037 15:58:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.037 15:58:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.037 15:58:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.037 15:58:02 -- accel/accel.sh@42 -- # jq -r . 00:07:32.037 -x option must be non-negative. 00:07:32.037 [2024-11-20 15:58:02.831462] app.c:1292:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:07:32.037 accel_perf options: 00:07:32.037 [-h help message] 00:07:32.037 [-q queue depth per core] 00:07:32.037 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:07:32.037 [-T number of threads per core 00:07:32.037 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:07:32.037 [-t time in seconds] 00:07:32.037 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:07:32.037 [ dif_verify, , dif_generate, dif_generate_copy 00:07:32.037 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:07:32.037 [-l for compress/decompress workloads, name of uncompressed input file 00:07:32.037 [-S for crc32c workload, use this seed value (default 0) 00:07:32.037 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:07:32.037 [-f for fill workload, use this BYTE value (default 255) 00:07:32.037 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:07:32.037 [-y verify result if this switch is on] 00:07:32.037 [-a tasks to allocate per core (default: same value as -q)] 00:07:32.037 Can be used to spread operations across a wider range of memory. 
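Both negative tests land in the same usage text, which documents every flag the remaining runs rely on (-t, -w, -q, -S, -C, -x, -y, and so on). As a small sketch of an invocation the parser does accept, here is an xor run with the minimum legal buffer count; the value is illustrative, not taken from this log:

  # -x sets the number of xor source buffers; the listing gives a minimum of 2, so the
  # -1 used by accel_negative_buffers above is rejected with "-x option must be non-negative."
  ./build/examples/accel_perf -t 1 -w xor -y -x 2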
00:07:32.037 15:58:02 -- common/autotest_common.sh@653 -- # es=1 00:07:32.037 15:58:02 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:32.037 15:58:02 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:32.037 15:58:02 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:32.037 00:07:32.037 real 0m0.035s 00:07:32.037 user 0m0.015s 00:07:32.037 sys 0m0.020s 00:07:32.037 15:58:02 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:32.037 15:58:02 -- common/autotest_common.sh@10 -- # set +x 00:07:32.037 ************************************ 00:07:32.037 END TEST accel_negative_buffers 00:07:32.037 ************************************ 00:07:32.296 Error: writing output failed: Broken pipe 00:07:32.296 15:58:02 -- accel/accel.sh@93 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:07:32.296 15:58:02 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:32.296 15:58:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:32.296 15:58:02 -- common/autotest_common.sh@10 -- # set +x 00:07:32.296 ************************************ 00:07:32.296 START TEST accel_crc32c 00:07:32.296 ************************************ 00:07:32.296 15:58:02 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -S 32 -y 00:07:32.296 15:58:02 -- accel/accel.sh@16 -- # local accel_opc 00:07:32.296 15:58:02 -- accel/accel.sh@17 -- # local accel_module 00:07:32.296 15:58:02 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:32.296 15:58:02 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:32.296 15:58:02 -- accel/accel.sh@12 -- # build_accel_config 00:07:32.296 15:58:02 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:32.296 15:58:02 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:32.296 15:58:02 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:32.296 15:58:02 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:32.296 15:58:02 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:32.296 15:58:02 -- accel/accel.sh@41 -- # local IFS=, 00:07:32.296 15:58:02 -- accel/accel.sh@42 -- # jq -r . 00:07:32.296 [2024-11-20 15:58:02.903487] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:32.296 [2024-11-20 15:58:02.903550] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195425 ] 00:07:32.296 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.296 [2024-11-20 15:58:02.974877] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.296 [2024-11-20 15:58:03.013388] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.827 15:58:04 -- accel/accel.sh@18 -- # out=' 00:07:33.827 SPDK Configuration: 00:07:33.827 Core mask: 0x1 00:07:33.827 00:07:33.827 Accel Perf Configuration: 00:07:33.827 Workload Type: crc32c 00:07:33.827 CRC-32C seed: 32 00:07:33.827 Transfer size: 4096 bytes 00:07:33.827 Vector count 1 00:07:33.827 Module: software 00:07:33.827 Queue depth: 32 00:07:33.827 Allocate depth: 32 00:07:33.827 # threads/core: 1 00:07:33.827 Run time: 1 seconds 00:07:33.827 Verify: Yes 00:07:33.827 00:07:33.827 Running for 1 seconds... 
00:07:33.827 00:07:33.827 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:33.827 ------------------------------------------------------------------------------------ 00:07:33.827 0,0 601696/s 2350 MiB/s 0 0 00:07:33.827 ==================================================================================== 00:07:33.827 Total 601696/s 2350 MiB/s 0 0' 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.827 15:58:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:07:33.827 15:58:04 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:07:33.827 15:58:04 -- accel/accel.sh@12 -- # build_accel_config 00:07:33.827 15:58:04 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:33.827 15:58:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:33.827 15:58:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:33.827 15:58:04 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:33.827 15:58:04 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:33.827 15:58:04 -- accel/accel.sh@41 -- # local IFS=, 00:07:33.827 15:58:04 -- accel/accel.sh@42 -- # jq -r . 00:07:33.827 [2024-11-20 15:58:04.206942] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:33.827 [2024-11-20 15:58:04.207027] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195577 ] 00:07:33.827 EAL: No free 2048 kB hugepages reported on node 1 00:07:33.827 [2024-11-20 15:58:04.278471] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.827 [2024-11-20 15:58:04.313463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.827 15:58:04 -- accel/accel.sh@21 -- # val= 00:07:33.827 15:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.827 15:58:04 -- accel/accel.sh@21 -- # val= 00:07:33.827 15:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.827 15:58:04 -- accel/accel.sh@21 -- # val=0x1 00:07:33.827 15:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.827 15:58:04 -- accel/accel.sh@21 -- # val= 00:07:33.827 15:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.827 15:58:04 -- accel/accel.sh@21 -- # val= 00:07:33.827 15:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.827 15:58:04 -- accel/accel.sh@21 -- # val=crc32c 00:07:33.827 15:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.827 15:58:04 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.827 15:58:04 -- accel/accel.sh@21 -- # val=32 00:07:33.827 15:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 
-- accel/accel.sh@20 -- # read -r var val 00:07:33.827 15:58:04 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:33.827 15:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.827 15:58:04 -- accel/accel.sh@21 -- # val= 00:07:33.827 15:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.827 15:58:04 -- accel/accel.sh@21 -- # val=software 00:07:33.827 15:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.827 15:58:04 -- accel/accel.sh@23 -- # accel_module=software 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.827 15:58:04 -- accel/accel.sh@21 -- # val=32 00:07:33.827 15:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.827 15:58:04 -- accel/accel.sh@21 -- # val=32 00:07:33.827 15:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.827 15:58:04 -- accel/accel.sh@21 -- # val=1 00:07:33.827 15:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.827 15:58:04 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:33.827 15:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.827 15:58:04 -- accel/accel.sh@21 -- # val=Yes 00:07:33.827 15:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.827 15:58:04 -- accel/accel.sh@21 -- # val= 00:07:33.827 15:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:33.827 15:58:04 -- accel/accel.sh@21 -- # val= 00:07:33.827 15:58:04 -- accel/accel.sh@22 -- # case "$var" in 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # IFS=: 00:07:33.827 15:58:04 -- accel/accel.sh@20 -- # read -r var val 00:07:34.765 15:58:05 -- accel/accel.sh@21 -- # val= 00:07:34.765 15:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.765 15:58:05 -- accel/accel.sh@20 -- # IFS=: 00:07:34.765 15:58:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.765 15:58:05 -- accel/accel.sh@21 -- # val= 00:07:34.765 15:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.765 15:58:05 -- accel/accel.sh@20 -- # IFS=: 00:07:34.765 15:58:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.765 15:58:05 -- accel/accel.sh@21 -- # val= 00:07:34.765 15:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.765 15:58:05 -- accel/accel.sh@20 -- # IFS=: 00:07:34.765 15:58:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.765 15:58:05 -- accel/accel.sh@21 -- # val= 00:07:34.765 15:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.765 15:58:05 -- accel/accel.sh@20 -- # IFS=: 00:07:34.765 15:58:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.765 15:58:05 -- accel/accel.sh@21 -- # val= 00:07:34.765 15:58:05 -- accel/accel.sh@22 -- # case "$var" in 
00:07:34.765 15:58:05 -- accel/accel.sh@20 -- # IFS=: 00:07:34.765 15:58:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.765 15:58:05 -- accel/accel.sh@21 -- # val= 00:07:34.765 15:58:05 -- accel/accel.sh@22 -- # case "$var" in 00:07:34.765 15:58:05 -- accel/accel.sh@20 -- # IFS=: 00:07:34.765 15:58:05 -- accel/accel.sh@20 -- # read -r var val 00:07:34.765 15:58:05 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:34.765 15:58:05 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:34.765 15:58:05 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:34.765 00:07:34.765 real 0m2.607s 00:07:34.765 user 0m2.348s 00:07:34.765 sys 0m0.269s 00:07:34.765 15:58:05 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:34.765 15:58:05 -- common/autotest_common.sh@10 -- # set +x 00:07:34.765 ************************************ 00:07:34.765 END TEST accel_crc32c 00:07:34.765 ************************************ 00:07:34.765 15:58:05 -- accel/accel.sh@94 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:07:34.765 15:58:05 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:34.765 15:58:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:34.765 15:58:05 -- common/autotest_common.sh@10 -- # set +x 00:07:34.765 ************************************ 00:07:34.765 START TEST accel_crc32c_C2 00:07:34.765 ************************************ 00:07:34.765 15:58:05 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w crc32c -y -C 2 00:07:34.765 15:58:05 -- accel/accel.sh@16 -- # local accel_opc 00:07:34.765 15:58:05 -- accel/accel.sh@17 -- # local accel_module 00:07:34.765 15:58:05 -- accel/accel.sh@18 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:34.765 15:58:05 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:34.765 15:58:05 -- accel/accel.sh@12 -- # build_accel_config 00:07:34.765 15:58:05 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:34.765 15:58:05 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:34.765 15:58:05 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:34.765 15:58:05 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:34.765 15:58:05 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:34.765 15:58:05 -- accel/accel.sh@41 -- # local IFS=, 00:07:34.765 15:58:05 -- accel/accel.sh@42 -- # jq -r . 00:07:34.765 [2024-11-20 15:58:05.545456] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:34.765 [2024-11-20 15:58:05.545509] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1195774 ] 00:07:35.025 EAL: No free 2048 kB hugepages reported on node 1 00:07:35.025 [2024-11-20 15:58:05.614594] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.025 [2024-11-20 15:58:05.650903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.405 15:58:06 -- accel/accel.sh@18 -- # out=' 00:07:36.405 SPDK Configuration: 00:07:36.405 Core mask: 0x1 00:07:36.405 00:07:36.405 Accel Perf Configuration: 00:07:36.405 Workload Type: crc32c 00:07:36.405 CRC-32C seed: 0 00:07:36.405 Transfer size: 4096 bytes 00:07:36.405 Vector count 2 00:07:36.405 Module: software 00:07:36.405 Queue depth: 32 00:07:36.405 Allocate depth: 32 00:07:36.405 # threads/core: 1 00:07:36.405 Run time: 1 seconds 00:07:36.405 Verify: Yes 00:07:36.405 00:07:36.405 Running for 1 seconds... 00:07:36.405 00:07:36.405 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:36.405 ------------------------------------------------------------------------------------ 00:07:36.405 0,0 480512/s 3754 MiB/s 0 0 00:07:36.405 ==================================================================================== 00:07:36.405 Total 480512/s 1877 MiB/s 0 0' 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.405 15:58:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:07:36.405 15:58:06 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:07:36.405 15:58:06 -- accel/accel.sh@12 -- # build_accel_config 00:07:36.405 15:58:06 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:36.405 15:58:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:36.405 15:58:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:36.405 15:58:06 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:36.405 15:58:06 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:36.405 15:58:06 -- accel/accel.sh@41 -- # local IFS=, 00:07:36.405 15:58:06 -- accel/accel.sh@42 -- # jq -r . 00:07:36.405 [2024-11-20 15:58:06.840577] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:36.405 [2024-11-20 15:58:06.840643] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196025 ] 00:07:36.405 EAL: No free 2048 kB hugepages reported on node 1 00:07:36.405 [2024-11-20 15:58:06.909466] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.405 [2024-11-20 15:58:06.943211] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.405 15:58:06 -- accel/accel.sh@21 -- # val= 00:07:36.405 15:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.405 15:58:06 -- accel/accel.sh@21 -- # val= 00:07:36.405 15:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.405 15:58:06 -- accel/accel.sh@21 -- # val=0x1 00:07:36.405 15:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.405 15:58:06 -- accel/accel.sh@21 -- # val= 00:07:36.405 15:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.405 15:58:06 -- accel/accel.sh@21 -- # val= 00:07:36.405 15:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.405 15:58:06 -- accel/accel.sh@21 -- # val=crc32c 00:07:36.405 15:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.405 15:58:06 -- accel/accel.sh@24 -- # accel_opc=crc32c 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.405 15:58:06 -- accel/accel.sh@21 -- # val=0 00:07:36.405 15:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.405 15:58:06 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:36.405 15:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.405 15:58:06 -- accel/accel.sh@21 -- # val= 00:07:36.405 15:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.405 15:58:06 -- accel/accel.sh@21 -- # val=software 00:07:36.405 15:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.405 15:58:06 -- accel/accel.sh@23 -- # accel_module=software 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.405 15:58:06 -- accel/accel.sh@21 -- # val=32 00:07:36.405 15:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.405 15:58:06 -- accel/accel.sh@21 -- # val=32 00:07:36.405 15:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.405 15:58:06 -- 
accel/accel.sh@21 -- # val=1 00:07:36.405 15:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.405 15:58:06 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:36.405 15:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.405 15:58:06 -- accel/accel.sh@21 -- # val=Yes 00:07:36.405 15:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.405 15:58:06 -- accel/accel.sh@21 -- # val= 00:07:36.405 15:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:36.405 15:58:06 -- accel/accel.sh@21 -- # val= 00:07:36.405 15:58:06 -- accel/accel.sh@22 -- # case "$var" in 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # IFS=: 00:07:36.405 15:58:06 -- accel/accel.sh@20 -- # read -r var val 00:07:37.343 15:58:08 -- accel/accel.sh@21 -- # val= 00:07:37.343 15:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.343 15:58:08 -- accel/accel.sh@20 -- # IFS=: 00:07:37.343 15:58:08 -- accel/accel.sh@20 -- # read -r var val 00:07:37.343 15:58:08 -- accel/accel.sh@21 -- # val= 00:07:37.343 15:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.343 15:58:08 -- accel/accel.sh@20 -- # IFS=: 00:07:37.343 15:58:08 -- accel/accel.sh@20 -- # read -r var val 00:07:37.343 15:58:08 -- accel/accel.sh@21 -- # val= 00:07:37.343 15:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.343 15:58:08 -- accel/accel.sh@20 -- # IFS=: 00:07:37.343 15:58:08 -- accel/accel.sh@20 -- # read -r var val 00:07:37.343 15:58:08 -- accel/accel.sh@21 -- # val= 00:07:37.343 15:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.343 15:58:08 -- accel/accel.sh@20 -- # IFS=: 00:07:37.343 15:58:08 -- accel/accel.sh@20 -- # read -r var val 00:07:37.343 15:58:08 -- accel/accel.sh@21 -- # val= 00:07:37.343 15:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.343 15:58:08 -- accel/accel.sh@20 -- # IFS=: 00:07:37.343 15:58:08 -- accel/accel.sh@20 -- # read -r var val 00:07:37.343 15:58:08 -- accel/accel.sh@21 -- # val= 00:07:37.343 15:58:08 -- accel/accel.sh@22 -- # case "$var" in 00:07:37.343 15:58:08 -- accel/accel.sh@20 -- # IFS=: 00:07:37.343 15:58:08 -- accel/accel.sh@20 -- # read -r var val 00:07:37.343 15:58:08 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:37.343 15:58:08 -- accel/accel.sh@28 -- # [[ -n crc32c ]] 00:07:37.343 15:58:08 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:37.343 00:07:37.343 real 0m2.583s 00:07:37.343 user 0m2.340s 00:07:37.343 sys 0m0.253s 00:07:37.343 15:58:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:37.343 15:58:08 -- common/autotest_common.sh@10 -- # set +x 00:07:37.343 ************************************ 00:07:37.343 END TEST accel_crc32c_C2 00:07:37.343 ************************************ 00:07:37.603 15:58:08 -- accel/accel.sh@95 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:07:37.603 15:58:08 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:37.603 15:58:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:37.603 15:58:08 -- common/autotest_common.sh@10 -- # set +x 00:07:37.603 ************************************ 00:07:37.603 START TEST accel_copy 
00:07:37.603 ************************************ 00:07:37.603 15:58:08 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy -y 00:07:37.603 15:58:08 -- accel/accel.sh@16 -- # local accel_opc 00:07:37.603 15:58:08 -- accel/accel.sh@17 -- # local accel_module 00:07:37.603 15:58:08 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy -y 00:07:37.603 15:58:08 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:37.603 15:58:08 -- accel/accel.sh@12 -- # build_accel_config 00:07:37.603 15:58:08 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:37.603 15:58:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:37.603 15:58:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:37.603 15:58:08 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:37.603 15:58:08 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:37.603 15:58:08 -- accel/accel.sh@41 -- # local IFS=, 00:07:37.603 15:58:08 -- accel/accel.sh@42 -- # jq -r . 00:07:37.603 [2024-11-20 15:58:08.186923] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:37.603 [2024-11-20 15:58:08.186986] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196312 ] 00:07:37.603 EAL: No free 2048 kB hugepages reported on node 1 00:07:37.603 [2024-11-20 15:58:08.254896] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.603 [2024-11-20 15:58:08.290152] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.983 15:58:09 -- accel/accel.sh@18 -- # out=' 00:07:38.983 SPDK Configuration: 00:07:38.983 Core mask: 0x1 00:07:38.983 00:07:38.983 Accel Perf Configuration: 00:07:38.983 Workload Type: copy 00:07:38.983 Transfer size: 4096 bytes 00:07:38.983 Vector count 1 00:07:38.983 Module: software 00:07:38.983 Queue depth: 32 00:07:38.983 Allocate depth: 32 00:07:38.983 # threads/core: 1 00:07:38.983 Run time: 1 seconds 00:07:38.983 Verify: Yes 00:07:38.983 00:07:38.983 Running for 1 seconds... 00:07:38.983 00:07:38.983 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:38.983 ------------------------------------------------------------------------------------ 00:07:38.983 0,0 450048/s 1758 MiB/s 0 0 00:07:38.983 ==================================================================================== 00:07:38.983 Total 450048/s 1758 MiB/s 0 0' 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.983 15:58:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:07:38.983 15:58:09 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:07:38.983 15:58:09 -- accel/accel.sh@12 -- # build_accel_config 00:07:38.983 15:58:09 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:38.983 15:58:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:38.983 15:58:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:38.983 15:58:09 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:38.983 15:58:09 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:38.983 15:58:09 -- accel/accel.sh@41 -- # local IFS=, 00:07:38.983 15:58:09 -- accel/accel.sh@42 -- # jq -r . 00:07:38.983 [2024-11-20 15:58:09.480290] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:38.983 [2024-11-20 15:58:09.480356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196580 ] 00:07:38.983 EAL: No free 2048 kB hugepages reported on node 1 00:07:38.983 [2024-11-20 15:58:09.548658] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.983 [2024-11-20 15:58:09.582374] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.983 15:58:09 -- accel/accel.sh@21 -- # val= 00:07:38.983 15:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.983 15:58:09 -- accel/accel.sh@21 -- # val= 00:07:38.983 15:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.983 15:58:09 -- accel/accel.sh@21 -- # val=0x1 00:07:38.983 15:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.983 15:58:09 -- accel/accel.sh@21 -- # val= 00:07:38.983 15:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.983 15:58:09 -- accel/accel.sh@21 -- # val= 00:07:38.983 15:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.983 15:58:09 -- accel/accel.sh@21 -- # val=copy 00:07:38.983 15:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.983 15:58:09 -- accel/accel.sh@24 -- # accel_opc=copy 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.983 15:58:09 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:38.983 15:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.983 15:58:09 -- accel/accel.sh@21 -- # val= 00:07:38.983 15:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.983 15:58:09 -- accel/accel.sh@21 -- # val=software 00:07:38.983 15:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.983 15:58:09 -- accel/accel.sh@23 -- # accel_module=software 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.983 15:58:09 -- accel/accel.sh@21 -- # val=32 00:07:38.983 15:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.983 15:58:09 -- accel/accel.sh@21 -- # val=32 00:07:38.983 15:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.983 15:58:09 -- accel/accel.sh@21 -- # val=1 00:07:38.983 15:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.983 15:58:09 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:38.983 15:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.983 15:58:09 -- accel/accel.sh@21 -- # val=Yes 00:07:38.983 15:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.983 15:58:09 -- accel/accel.sh@21 -- # val= 00:07:38.983 15:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:38.983 15:58:09 -- accel/accel.sh@21 -- # val= 00:07:38.983 15:58:09 -- accel/accel.sh@22 -- # case "$var" in 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # IFS=: 00:07:38.983 15:58:09 -- accel/accel.sh@20 -- # read -r var val 00:07:40.361 15:58:10 -- accel/accel.sh@21 -- # val= 00:07:40.361 15:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.361 15:58:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.361 15:58:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.361 15:58:10 -- accel/accel.sh@21 -- # val= 00:07:40.361 15:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.361 15:58:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.361 15:58:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.361 15:58:10 -- accel/accel.sh@21 -- # val= 00:07:40.361 15:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.361 15:58:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.361 15:58:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.361 15:58:10 -- accel/accel.sh@21 -- # val= 00:07:40.361 15:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.361 15:58:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.361 15:58:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.361 15:58:10 -- accel/accel.sh@21 -- # val= 00:07:40.361 15:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.361 15:58:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.361 15:58:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.361 15:58:10 -- accel/accel.sh@21 -- # val= 00:07:40.361 15:58:10 -- accel/accel.sh@22 -- # case "$var" in 00:07:40.361 15:58:10 -- accel/accel.sh@20 -- # IFS=: 00:07:40.361 15:58:10 -- accel/accel.sh@20 -- # read -r var val 00:07:40.361 15:58:10 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:40.361 15:58:10 -- accel/accel.sh@28 -- # [[ -n copy ]] 00:07:40.361 15:58:10 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:40.361 00:07:40.361 real 0m2.591s 00:07:40.361 user 0m2.356s 00:07:40.361 sys 0m0.243s 00:07:40.361 15:58:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:40.361 15:58:10 -- common/autotest_common.sh@10 -- # set +x 00:07:40.361 ************************************ 00:07:40.361 END TEST accel_copy 00:07:40.361 ************************************ 00:07:40.361 15:58:10 -- accel/accel.sh@96 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:40.361 15:58:10 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:07:40.361 15:58:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:40.361 15:58:10 -- common/autotest_common.sh@10 -- # set +x 00:07:40.361 ************************************ 00:07:40.361 START TEST accel_fill 00:07:40.361 ************************************ 00:07:40.361 15:58:10 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:40.361 15:58:10 -- accel/accel.sh@16 -- # local accel_opc 
00:07:40.361 15:58:10 -- accel/accel.sh@17 -- # local accel_module 00:07:40.361 15:58:10 -- accel/accel.sh@18 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:40.361 15:58:10 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:40.361 15:58:10 -- accel/accel.sh@12 -- # build_accel_config 00:07:40.361 15:58:10 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:40.361 15:58:10 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:40.361 15:58:10 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:40.361 15:58:10 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:40.361 15:58:10 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:40.361 15:58:10 -- accel/accel.sh@41 -- # local IFS=, 00:07:40.361 15:58:10 -- accel/accel.sh@42 -- # jq -r . 00:07:40.361 [2024-11-20 15:58:10.825536] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:40.361 [2024-11-20 15:58:10.825603] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1196861 ] 00:07:40.362 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.362 [2024-11-20 15:58:10.893545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.362 [2024-11-20 15:58:10.928530] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.300 15:58:12 -- accel/accel.sh@18 -- # out=' 00:07:41.300 SPDK Configuration: 00:07:41.300 Core mask: 0x1 00:07:41.300 00:07:41.300 Accel Perf Configuration: 00:07:41.300 Workload Type: fill 00:07:41.300 Fill pattern: 0x80 00:07:41.300 Transfer size: 4096 bytes 00:07:41.300 Vector count 1 00:07:41.300 Module: software 00:07:41.300 Queue depth: 64 00:07:41.300 Allocate depth: 64 00:07:41.300 # threads/core: 1 00:07:41.300 Run time: 1 seconds 00:07:41.300 Verify: Yes 00:07:41.300 00:07:41.300 Running for 1 seconds... 00:07:41.300 00:07:41.300 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:41.300 ------------------------------------------------------------------------------------ 00:07:41.300 0,0 697408/s 2724 MiB/s 0 0 00:07:41.300 ==================================================================================== 00:07:41.300 Total 697408/s 2724 MiB/s 0 0' 00:07:41.300 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.300 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.300 15:58:12 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:41.300 15:58:12 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:07:41.300 15:58:12 -- accel/accel.sh@12 -- # build_accel_config 00:07:41.300 15:58:12 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:41.300 15:58:12 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:41.300 15:58:12 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:41.300 15:58:12 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:41.300 15:58:12 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:41.300 15:58:12 -- accel/accel.sh@41 -- # local IFS=, 00:07:41.300 15:58:12 -- accel/accel.sh@42 -- # jq -r . 00:07:41.560 [2024-11-20 15:58:12.119832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:41.560 [2024-11-20 15:58:12.119898] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197098 ] 00:07:41.560 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.560 [2024-11-20 15:58:12.188350] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.560 [2024-11-20 15:58:12.222777] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.560 15:58:12 -- accel/accel.sh@21 -- # val= 00:07:41.560 15:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.560 15:58:12 -- accel/accel.sh@21 -- # val= 00:07:41.560 15:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.560 15:58:12 -- accel/accel.sh@21 -- # val=0x1 00:07:41.560 15:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.560 15:58:12 -- accel/accel.sh@21 -- # val= 00:07:41.560 15:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.560 15:58:12 -- accel/accel.sh@21 -- # val= 00:07:41.560 15:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.560 15:58:12 -- accel/accel.sh@21 -- # val=fill 00:07:41.560 15:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.560 15:58:12 -- accel/accel.sh@24 -- # accel_opc=fill 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.560 15:58:12 -- accel/accel.sh@21 -- # val=0x80 00:07:41.560 15:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.560 15:58:12 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:41.560 15:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.560 15:58:12 -- accel/accel.sh@21 -- # val= 00:07:41.560 15:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.560 15:58:12 -- accel/accel.sh@21 -- # val=software 00:07:41.560 15:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.560 15:58:12 -- accel/accel.sh@23 -- # accel_module=software 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.560 15:58:12 -- accel/accel.sh@21 -- # val=64 00:07:41.560 15:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.560 15:58:12 -- accel/accel.sh@21 -- # val=64 00:07:41.560 15:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.560 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.561 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.561 15:58:12 -- 
accel/accel.sh@21 -- # val=1 00:07:41.561 15:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.561 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.561 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.561 15:58:12 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:41.561 15:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.561 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.561 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.561 15:58:12 -- accel/accel.sh@21 -- # val=Yes 00:07:41.561 15:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.561 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.561 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.561 15:58:12 -- accel/accel.sh@21 -- # val= 00:07:41.561 15:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.561 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.561 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:41.561 15:58:12 -- accel/accel.sh@21 -- # val= 00:07:41.561 15:58:12 -- accel/accel.sh@22 -- # case "$var" in 00:07:41.561 15:58:12 -- accel/accel.sh@20 -- # IFS=: 00:07:41.561 15:58:12 -- accel/accel.sh@20 -- # read -r var val 00:07:42.939 15:58:13 -- accel/accel.sh@21 -- # val= 00:07:42.939 15:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.939 15:58:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.939 15:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.939 15:58:13 -- accel/accel.sh@21 -- # val= 00:07:42.939 15:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.939 15:58:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.939 15:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.939 15:58:13 -- accel/accel.sh@21 -- # val= 00:07:42.939 15:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.939 15:58:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.939 15:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.939 15:58:13 -- accel/accel.sh@21 -- # val= 00:07:42.939 15:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.939 15:58:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.939 15:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.939 15:58:13 -- accel/accel.sh@21 -- # val= 00:07:42.939 15:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.939 15:58:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.939 15:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.939 15:58:13 -- accel/accel.sh@21 -- # val= 00:07:42.939 15:58:13 -- accel/accel.sh@22 -- # case "$var" in 00:07:42.939 15:58:13 -- accel/accel.sh@20 -- # IFS=: 00:07:42.939 15:58:13 -- accel/accel.sh@20 -- # read -r var val 00:07:42.939 15:58:13 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:42.939 15:58:13 -- accel/accel.sh@28 -- # [[ -n fill ]] 00:07:42.939 15:58:13 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:42.939 00:07:42.939 real 0m2.592s 00:07:42.939 user 0m2.345s 00:07:42.939 sys 0m0.257s 00:07:42.939 15:58:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:42.939 15:58:13 -- common/autotest_common.sh@10 -- # set +x 00:07:42.939 ************************************ 00:07:42.939 END TEST accel_fill 00:07:42.939 ************************************ 00:07:42.939 15:58:13 -- accel/accel.sh@97 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:07:42.939 15:58:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:42.939 15:58:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:42.939 15:58:13 -- common/autotest_common.sh@10 -- # set +x 00:07:42.939 ************************************ 00:07:42.939 START TEST 
accel_copy_crc32c 00:07:42.939 ************************************ 00:07:42.939 15:58:13 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y 00:07:42.939 15:58:13 -- accel/accel.sh@16 -- # local accel_opc 00:07:42.939 15:58:13 -- accel/accel.sh@17 -- # local accel_module 00:07:42.939 15:58:13 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:42.939 15:58:13 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:42.939 15:58:13 -- accel/accel.sh@12 -- # build_accel_config 00:07:42.939 15:58:13 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:42.939 15:58:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:42.939 15:58:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:42.939 15:58:13 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:42.939 15:58:13 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:42.939 15:58:13 -- accel/accel.sh@41 -- # local IFS=, 00:07:42.939 15:58:13 -- accel/accel.sh@42 -- # jq -r . 00:07:42.939 [2024-11-20 15:58:13.465812] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:42.939 [2024-11-20 15:58:13.465876] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197289 ] 00:07:42.939 EAL: No free 2048 kB hugepages reported on node 1 00:07:42.939 [2024-11-20 15:58:13.535401] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.939 [2024-11-20 15:58:13.570642] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.314 15:58:14 -- accel/accel.sh@18 -- # out=' 00:07:44.314 SPDK Configuration: 00:07:44.314 Core mask: 0x1 00:07:44.314 00:07:44.314 Accel Perf Configuration: 00:07:44.314 Workload Type: copy_crc32c 00:07:44.314 CRC-32C seed: 0 00:07:44.314 Vector size: 4096 bytes 00:07:44.314 Transfer size: 4096 bytes 00:07:44.314 Vector count 1 00:07:44.314 Module: software 00:07:44.314 Queue depth: 32 00:07:44.314 Allocate depth: 32 00:07:44.314 # threads/core: 1 00:07:44.314 Run time: 1 seconds 00:07:44.314 Verify: Yes 00:07:44.314 00:07:44.314 Running for 1 seconds... 00:07:44.314 00:07:44.314 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:44.314 ------------------------------------------------------------------------------------ 00:07:44.314 0,0 345632/s 1350 MiB/s 0 0 00:07:44.314 ==================================================================================== 00:07:44.314 Total 345632/s 1350 MiB/s 0 0' 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:07:44.314 15:58:14 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:07:44.314 15:58:14 -- accel/accel.sh@12 -- # build_accel_config 00:07:44.314 15:58:14 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:44.314 15:58:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:44.314 15:58:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:44.314 15:58:14 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:44.314 15:58:14 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:44.314 15:58:14 -- accel/accel.sh@41 -- # local IFS=, 00:07:44.314 15:58:14 -- accel/accel.sh@42 -- # jq -r . 
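As a quick sanity check on the copy_crc32c table above (assuming the Bandwidth column is transfers per second times the 4096-byte transfer size, with MiB = 2^20 bytes): 345632 transfers/s * 4096 bytes = 1,415,708,672 bytes/s, which is about 1350 MiB/s and matches the reported figure.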
00:07:44.314 [2024-11-20 15:58:14.760431] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:44.314 [2024-11-20 15:58:14.760496] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197447 ] 00:07:44.314 EAL: No free 2048 kB hugepages reported on node 1 00:07:44.314 [2024-11-20 15:58:14.830569] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.314 [2024-11-20 15:58:14.867120] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val= 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val= 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val=0x1 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val= 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val= 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.314 15:58:14 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val=0 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val= 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val=software 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.314 15:58:14 -- accel/accel.sh@23 -- # accel_module=software 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val=32 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 
00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val=32 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val=1 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val=Yes 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val= 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:44.314 15:58:14 -- accel/accel.sh@21 -- # val= 00:07:44.314 15:58:14 -- accel/accel.sh@22 -- # case "$var" in 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # IFS=: 00:07:44.314 15:58:14 -- accel/accel.sh@20 -- # read -r var val 00:07:45.250 15:58:16 -- accel/accel.sh@21 -- # val= 00:07:45.250 15:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.250 15:58:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.250 15:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.250 15:58:16 -- accel/accel.sh@21 -- # val= 00:07:45.250 15:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.250 15:58:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.250 15:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.250 15:58:16 -- accel/accel.sh@21 -- # val= 00:07:45.250 15:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.250 15:58:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.250 15:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.250 15:58:16 -- accel/accel.sh@21 -- # val= 00:07:45.250 15:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.250 15:58:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.250 15:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.250 15:58:16 -- accel/accel.sh@21 -- # val= 00:07:45.250 15:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.250 15:58:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.250 15:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.250 15:58:16 -- accel/accel.sh@21 -- # val= 00:07:45.250 15:58:16 -- accel/accel.sh@22 -- # case "$var" in 00:07:45.250 15:58:16 -- accel/accel.sh@20 -- # IFS=: 00:07:45.250 15:58:16 -- accel/accel.sh@20 -- # read -r var val 00:07:45.250 15:58:16 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:45.250 15:58:16 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:45.250 15:58:16 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:45.250 00:07:45.250 real 0m2.598s 00:07:45.250 user 0m2.359s 00:07:45.250 sys 0m0.249s 00:07:45.250 15:58:16 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:45.250 15:58:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.250 ************************************ 00:07:45.250 END TEST accel_copy_crc32c 00:07:45.250 ************************************ 00:07:45.509 
15:58:16 -- accel/accel.sh@98 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:07:45.509 15:58:16 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:45.509 15:58:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:45.509 15:58:16 -- common/autotest_common.sh@10 -- # set +x 00:07:45.509 ************************************ 00:07:45.509 START TEST accel_copy_crc32c_C2 00:07:45.509 ************************************ 00:07:45.509 15:58:16 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:07:45.509 15:58:16 -- accel/accel.sh@16 -- # local accel_opc 00:07:45.509 15:58:16 -- accel/accel.sh@17 -- # local accel_module 00:07:45.509 15:58:16 -- accel/accel.sh@18 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:45.509 15:58:16 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:45.509 15:58:16 -- accel/accel.sh@12 -- # build_accel_config 00:07:45.509 15:58:16 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:45.509 15:58:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:45.509 15:58:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:45.509 15:58:16 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:45.509 15:58:16 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:45.509 15:58:16 -- accel/accel.sh@41 -- # local IFS=, 00:07:45.509 15:58:16 -- accel/accel.sh@42 -- # jq -r . 00:07:45.509 [2024-11-20 15:58:16.112592] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:45.509 [2024-11-20 15:58:16.112656] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197725 ] 00:07:45.509 EAL: No free 2048 kB hugepages reported on node 1 00:07:45.509 [2024-11-20 15:58:16.182480] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.509 [2024-11-20 15:58:16.217968] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.888 15:58:17 -- accel/accel.sh@18 -- # out=' 00:07:46.888 SPDK Configuration: 00:07:46.888 Core mask: 0x1 00:07:46.888 00:07:46.888 Accel Perf Configuration: 00:07:46.888 Workload Type: copy_crc32c 00:07:46.888 CRC-32C seed: 0 00:07:46.888 Vector size: 4096 bytes 00:07:46.888 Transfer size: 8192 bytes 00:07:46.888 Vector count 2 00:07:46.888 Module: software 00:07:46.888 Queue depth: 32 00:07:46.888 Allocate depth: 32 00:07:46.888 # threads/core: 1 00:07:46.888 Run time: 1 seconds 00:07:46.888 Verify: Yes 00:07:46.888 00:07:46.888 Running for 1 seconds... 
00:07:46.888 00:07:46.888 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:46.888 ------------------------------------------------------------------------------------ 00:07:46.888 0,0 247200/s 1931 MiB/s 0 0 00:07:46.888 ==================================================================================== 00:07:46.888 Total 247200/s 1931 MiB/s 0 0' 00:07:46.888 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.888 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.888 15:58:17 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:07:46.888 15:58:17 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:07:46.888 15:58:17 -- accel/accel.sh@12 -- # build_accel_config 00:07:46.888 15:58:17 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:46.888 15:58:17 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:46.888 15:58:17 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:46.889 15:58:17 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:46.889 15:58:17 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:46.889 15:58:17 -- accel/accel.sh@41 -- # local IFS=, 00:07:46.889 15:58:17 -- accel/accel.sh@42 -- # jq -r . 00:07:46.889 [2024-11-20 15:58:17.408199] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:46.889 [2024-11-20 15:58:17.408263] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1197997 ] 00:07:46.889 EAL: No free 2048 kB hugepages reported on node 1 00:07:46.889 [2024-11-20 15:58:17.477542] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:46.889 [2024-11-20 15:58:17.511845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val= 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val= 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val=0x1 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val= 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val= 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val=copy_crc32c 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@24 -- # accel_opc=copy_crc32c 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val=0 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=:
00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val='8192 bytes' 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val= 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val=software 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@23 -- # accel_module=software 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val=32 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val=32 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val=1 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val=Yes 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val= 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:46.889 15:58:17 -- accel/accel.sh@21 -- # val= 00:07:46.889 15:58:17 -- accel/accel.sh@22 -- # case "$var" in 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # IFS=: 00:07:46.889 15:58:17 -- accel/accel.sh@20 -- # read -r var val 00:07:48.269 15:58:18 -- accel/accel.sh@21 -- # val= 00:07:48.269 15:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.269 15:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:48.269 15:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:48.269 15:58:18 -- accel/accel.sh@21 -- # val= 00:07:48.269 15:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.269 15:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:48.269 15:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:48.269 15:58:18 -- accel/accel.sh@21 -- # val= 00:07:48.269 15:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.269 15:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:48.269 15:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:48.269 15:58:18 -- accel/accel.sh@21 -- # val= 00:07:48.269 15:58:18 -- 
accel/accel.sh@22 -- # case "$var" in 00:07:48.269 15:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:48.269 15:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:48.269 15:58:18 -- accel/accel.sh@21 -- # val= 00:07:48.269 15:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.269 15:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:48.269 15:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:48.269 15:58:18 -- accel/accel.sh@21 -- # val= 00:07:48.269 15:58:18 -- accel/accel.sh@22 -- # case "$var" in 00:07:48.269 15:58:18 -- accel/accel.sh@20 -- # IFS=: 00:07:48.269 15:58:18 -- accel/accel.sh@20 -- # read -r var val 00:07:48.269 15:58:18 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:48.269 15:58:18 -- accel/accel.sh@28 -- # [[ -n copy_crc32c ]] 00:07:48.269 15:58:18 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:48.269 00:07:48.269 real 0m2.597s 00:07:48.269 user 0m2.358s 00:07:48.269 sys 0m0.248s 00:07:48.269 15:58:18 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:48.269 15:58:18 -- common/autotest_common.sh@10 -- # set +x 00:07:48.269 ************************************ 00:07:48.269 END TEST accel_copy_crc32c_C2 00:07:48.269 ************************************ 00:07:48.269 15:58:18 -- accel/accel.sh@99 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:07:48.269 15:58:18 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:48.269 15:58:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:48.269 15:58:18 -- common/autotest_common.sh@10 -- # set +x 00:07:48.269 ************************************ 00:07:48.269 START TEST accel_dualcast 00:07:48.269 ************************************ 00:07:48.269 15:58:18 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dualcast -y 00:07:48.269 15:58:18 -- accel/accel.sh@16 -- # local accel_opc 00:07:48.269 15:58:18 -- accel/accel.sh@17 -- # local accel_module 00:07:48.269 15:58:18 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dualcast -y 00:07:48.269 15:58:18 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:48.269 15:58:18 -- accel/accel.sh@12 -- # build_accel_config 00:07:48.269 15:58:18 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:48.269 15:58:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:48.269 15:58:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:48.269 15:58:18 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:48.269 15:58:18 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:48.269 15:58:18 -- accel/accel.sh@41 -- # local IFS=, 00:07:48.269 15:58:18 -- accel/accel.sh@42 -- # jq -r . 00:07:48.269 [2024-11-20 15:58:18.753611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:48.269 [2024-11-20 15:58:18.753673] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198280 ] 00:07:48.269 EAL: No free 2048 kB hugepages reported on node 1 00:07:48.269 [2024-11-20 15:58:18.822099] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:48.269 [2024-11-20 15:58:18.857181] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.649 15:58:20 -- accel/accel.sh@18 -- # out=' 00:07:49.649 SPDK Configuration: 00:07:49.649 Core mask: 0x1 00:07:49.649 00:07:49.649 Accel Perf Configuration: 00:07:49.649 Workload Type: dualcast 00:07:49.649 Transfer size: 4096 bytes 00:07:49.649 Vector count 1 00:07:49.649 Module: software 00:07:49.649 Queue depth: 32 00:07:49.649 Allocate depth: 32 00:07:49.649 # threads/core: 1 00:07:49.649 Run time: 1 seconds 00:07:49.649 Verify: Yes 00:07:49.649 00:07:49.649 Running for 1 seconds... 00:07:49.649 00:07:49.649 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:49.649 ------------------------------------------------------------------------------------ 00:07:49.649 0,0 525824/s 2054 MiB/s 0 0 00:07:49.649 ==================================================================================== 00:07:49.649 Total 525824/s 2054 MiB/s 0 0' 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:49.649 15:58:20 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:07:49.649 15:58:20 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:07:49.649 15:58:20 -- accel/accel.sh@12 -- # build_accel_config 00:07:49.649 15:58:20 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:49.649 15:58:20 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:49.649 15:58:20 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:49.649 15:58:20 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:49.649 15:58:20 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:49.649 15:58:20 -- accel/accel.sh@41 -- # local IFS=, 00:07:49.649 15:58:20 -- accel/accel.sh@42 -- # jq -r . 00:07:49.649 [2024-11-20 15:58:20.048566] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:49.649 [2024-11-20 15:58:20.048632] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198547 ] 00:07:49.649 EAL: No free 2048 kB hugepages reported on node 1 00:07:49.649 [2024-11-20 15:58:20.118738] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.649 [2024-11-20 15:58:20.155307] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.649 15:58:20 -- accel/accel.sh@21 -- # val= 00:07:49.649 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:49.649 15:58:20 -- accel/accel.sh@21 -- # val= 00:07:49.649 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:49.649 15:58:20 -- accel/accel.sh@21 -- # val=0x1 00:07:49.649 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:49.649 15:58:20 -- accel/accel.sh@21 -- # val= 00:07:49.649 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:49.649 15:58:20 -- accel/accel.sh@21 -- # val= 00:07:49.649 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:49.649 15:58:20 -- accel/accel.sh@21 -- # val=dualcast 00:07:49.649 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.649 15:58:20 -- accel/accel.sh@24 -- # accel_opc=dualcast 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:49.649 15:58:20 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:49.649 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:49.649 15:58:20 -- accel/accel.sh@21 -- # val= 00:07:49.649 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:49.649 15:58:20 -- accel/accel.sh@21 -- # val=software 00:07:49.649 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.649 15:58:20 -- accel/accel.sh@23 -- # accel_module=software 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:49.649 15:58:20 -- accel/accel.sh@21 -- # val=32 00:07:49.649 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:49.649 15:58:20 -- accel/accel.sh@21 -- # val=32 00:07:49.649 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:49.649 15:58:20 -- accel/accel.sh@21 -- # val=1 00:07:49.649 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:49.649 15:58:20 
-- accel/accel.sh@21 -- # val='1 seconds' 00:07:49.649 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:49.649 15:58:20 -- accel/accel.sh@21 -- # val=Yes 00:07:49.649 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:49.649 15:58:20 -- accel/accel.sh@21 -- # val= 00:07:49.649 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:49.649 15:58:20 -- accel/accel.sh@21 -- # val= 00:07:49.649 15:58:20 -- accel/accel.sh@22 -- # case "$var" in 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # IFS=: 00:07:49.649 15:58:20 -- accel/accel.sh@20 -- # read -r var val 00:07:50.588 15:58:21 -- accel/accel.sh@21 -- # val= 00:07:50.588 15:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.588 15:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:50.588 15:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:50.588 15:58:21 -- accel/accel.sh@21 -- # val= 00:07:50.588 15:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.588 15:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:50.588 15:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:50.588 15:58:21 -- accel/accel.sh@21 -- # val= 00:07:50.588 15:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.588 15:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:50.588 15:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:50.588 15:58:21 -- accel/accel.sh@21 -- # val= 00:07:50.588 15:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.588 15:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:50.588 15:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:50.588 15:58:21 -- accel/accel.sh@21 -- # val= 00:07:50.588 15:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.588 15:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:50.588 15:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:50.588 15:58:21 -- accel/accel.sh@21 -- # val= 00:07:50.588 15:58:21 -- accel/accel.sh@22 -- # case "$var" in 00:07:50.588 15:58:21 -- accel/accel.sh@20 -- # IFS=: 00:07:50.588 15:58:21 -- accel/accel.sh@20 -- # read -r var val 00:07:50.588 15:58:21 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:50.588 15:58:21 -- accel/accel.sh@28 -- # [[ -n dualcast ]] 00:07:50.588 15:58:21 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:50.588 00:07:50.588 real 0m2.599s 00:07:50.588 user 0m2.357s 00:07:50.588 sys 0m0.250s 00:07:50.588 15:58:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:50.588 15:58:21 -- common/autotest_common.sh@10 -- # set +x 00:07:50.589 ************************************ 00:07:50.589 END TEST accel_dualcast 00:07:50.589 ************************************ 00:07:50.589 15:58:21 -- accel/accel.sh@100 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:07:50.589 15:58:21 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:50.589 15:58:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:50.589 15:58:21 -- common/autotest_common.sh@10 -- # set +x 00:07:50.589 ************************************ 00:07:50.589 START TEST accel_compare 00:07:50.589 ************************************ 00:07:50.589 15:58:21 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compare -y 00:07:50.589 15:58:21 -- accel/accel.sh@16 -- # local accel_opc 00:07:50.589 15:58:21 
-- accel/accel.sh@17 -- # local accel_module 00:07:50.589 15:58:21 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compare -y 00:07:50.589 15:58:21 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:50.589 15:58:21 -- accel/accel.sh@12 -- # build_accel_config 00:07:50.589 15:58:21 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:50.589 15:58:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:50.589 15:58:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:50.589 15:58:21 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:50.589 15:58:21 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:50.589 15:58:21 -- accel/accel.sh@41 -- # local IFS=, 00:07:50.589 15:58:21 -- accel/accel.sh@42 -- # jq -r . 00:07:50.848 [2024-11-20 15:58:21.402751] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:50.848 [2024-11-20 15:58:21.402817] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198795 ] 00:07:50.848 EAL: No free 2048 kB hugepages reported on node 1 00:07:50.848 [2024-11-20 15:58:21.472090] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.848 [2024-11-20 15:58:21.507769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.228 15:58:22 -- accel/accel.sh@18 -- # out=' 00:07:52.228 SPDK Configuration: 00:07:52.228 Core mask: 0x1 00:07:52.228 00:07:52.228 Accel Perf Configuration: 00:07:52.228 Workload Type: compare 00:07:52.228 Transfer size: 4096 bytes 00:07:52.228 Vector count 1 00:07:52.228 Module: software 00:07:52.228 Queue depth: 32 00:07:52.228 Allocate depth: 32 00:07:52.228 # threads/core: 1 00:07:52.228 Run time: 1 seconds 00:07:52.228 Verify: Yes 00:07:52.228 00:07:52.228 Running for 1 seconds... 00:07:52.228 00:07:52.228 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:52.228 ------------------------------------------------------------------------------------ 00:07:52.228 0,0 638272/s 2493 MiB/s 0 0 00:07:52.228 ==================================================================================== 00:07:52.228 Total 638272/s 2493 MiB/s 0 0' 00:07:52.228 15:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:52.228 15:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:52.228 15:58:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:07:52.228 15:58:22 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:07:52.228 15:58:22 -- accel/accel.sh@12 -- # build_accel_config 00:07:52.228 15:58:22 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:52.228 15:58:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:52.228 15:58:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:52.228 15:58:22 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:52.228 15:58:22 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:52.228 15:58:22 -- accel/accel.sh@41 -- # local IFS=, 00:07:52.228 15:58:22 -- accel/accel.sh@42 -- # jq -r . 00:07:52.228 [2024-11-20 15:58:22.700260] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:52.228 [2024-11-20 15:58:22.700334] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1198938 ] 00:07:52.228 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.228 [2024-11-20 15:58:22.770746] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.228 [2024-11-20 15:58:22.805414] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.228 15:58:22 -- accel/accel.sh@21 -- # val= 00:07:52.228 15:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.228 15:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:52.228 15:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:52.228 15:58:22 -- accel/accel.sh@21 -- # val= 00:07:52.228 15:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.228 15:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:52.228 15:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:52.228 15:58:22 -- accel/accel.sh@21 -- # val=0x1 00:07:52.228 15:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.228 15:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:52.228 15:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:52.228 15:58:22 -- accel/accel.sh@21 -- # val= 00:07:52.228 15:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.228 15:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:52.228 15:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:52.228 15:58:22 -- accel/accel.sh@21 -- # val= 00:07:52.228 15:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.228 15:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:52.228 15:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:52.228 15:58:22 -- accel/accel.sh@21 -- # val=compare 00:07:52.228 15:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.228 15:58:22 -- accel/accel.sh@24 -- # accel_opc=compare 00:07:52.228 15:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:52.228 15:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:52.228 15:58:22 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:52.228 15:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.228 15:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:52.229 15:58:22 -- accel/accel.sh@21 -- # val= 00:07:52.229 15:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:52.229 15:58:22 -- accel/accel.sh@21 -- # val=software 00:07:52.229 15:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.229 15:58:22 -- accel/accel.sh@23 -- # accel_module=software 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:52.229 15:58:22 -- accel/accel.sh@21 -- # val=32 00:07:52.229 15:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:52.229 15:58:22 -- accel/accel.sh@21 -- # val=32 00:07:52.229 15:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:52.229 15:58:22 -- accel/accel.sh@21 -- # val=1 00:07:52.229 15:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:52.229 15:58:22 -- 
accel/accel.sh@21 -- # val='1 seconds' 00:07:52.229 15:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:52.229 15:58:22 -- accel/accel.sh@21 -- # val=Yes 00:07:52.229 15:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:52.229 15:58:22 -- accel/accel.sh@21 -- # val= 00:07:52.229 15:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:52.229 15:58:22 -- accel/accel.sh@21 -- # val= 00:07:52.229 15:58:22 -- accel/accel.sh@22 -- # case "$var" in 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # IFS=: 00:07:52.229 15:58:22 -- accel/accel.sh@20 -- # read -r var val 00:07:53.168 15:58:23 -- accel/accel.sh@21 -- # val= 00:07:53.168 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.168 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:53.168 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:53.168 15:58:23 -- accel/accel.sh@21 -- # val= 00:07:53.168 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.168 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:53.168 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:53.168 15:58:23 -- accel/accel.sh@21 -- # val= 00:07:53.168 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.168 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:53.168 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:53.168 15:58:23 -- accel/accel.sh@21 -- # val= 00:07:53.168 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.168 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:53.168 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:53.168 15:58:23 -- accel/accel.sh@21 -- # val= 00:07:53.168 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.168 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:53.168 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:53.427 15:58:23 -- accel/accel.sh@21 -- # val= 00:07:53.427 15:58:23 -- accel/accel.sh@22 -- # case "$var" in 00:07:53.427 15:58:23 -- accel/accel.sh@20 -- # IFS=: 00:07:53.427 15:58:23 -- accel/accel.sh@20 -- # read -r var val 00:07:53.427 15:58:23 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:53.427 15:58:23 -- accel/accel.sh@28 -- # [[ -n compare ]] 00:07:53.427 15:58:23 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:53.427 00:07:53.427 real 0m2.599s 00:07:53.427 user 0m2.351s 00:07:53.427 sys 0m0.256s 00:07:53.427 15:58:23 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:53.427 15:58:23 -- common/autotest_common.sh@10 -- # set +x 00:07:53.427 ************************************ 00:07:53.427 END TEST accel_compare 00:07:53.427 ************************************ 00:07:53.427 15:58:24 -- accel/accel.sh@101 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:53.427 15:58:24 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:07:53.427 15:58:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:53.427 15:58:24 -- common/autotest_common.sh@10 -- # set +x 00:07:53.427 ************************************ 00:07:53.427 START TEST accel_xor 00:07:53.427 ************************************ 00:07:53.428 15:58:24 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y 00:07:53.428 15:58:24 -- accel/accel.sh@16 -- # local accel_opc 00:07:53.428 15:58:24 -- accel/accel.sh@17 
-- # local accel_module 00:07:53.428 15:58:24 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y 00:07:53.428 15:58:24 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:53.428 15:58:24 -- accel/accel.sh@12 -- # build_accel_config 00:07:53.428 15:58:24 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:53.428 15:58:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:53.428 15:58:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:53.428 15:58:24 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:53.428 15:58:24 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:53.428 15:58:24 -- accel/accel.sh@41 -- # local IFS=, 00:07:53.428 15:58:24 -- accel/accel.sh@42 -- # jq -r . 00:07:53.428 [2024-11-20 15:58:24.050957] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:53.428 [2024-11-20 15:58:24.051025] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199146 ] 00:07:53.428 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.428 [2024-11-20 15:58:24.122163] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.428 [2024-11-20 15:58:24.157899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.808 15:58:25 -- accel/accel.sh@18 -- # out=' 00:07:54.808 SPDK Configuration: 00:07:54.808 Core mask: 0x1 00:07:54.808 00:07:54.808 Accel Perf Configuration: 00:07:54.808 Workload Type: xor 00:07:54.808 Source buffers: 2 00:07:54.808 Transfer size: 4096 bytes 00:07:54.808 Vector count 1 00:07:54.808 Module: software 00:07:54.808 Queue depth: 32 00:07:54.808 Allocate depth: 32 00:07:54.808 # threads/core: 1 00:07:54.808 Run time: 1 seconds 00:07:54.808 Verify: Yes 00:07:54.808 00:07:54.808 Running for 1 seconds... 00:07:54.808 00:07:54.808 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:54.808 ------------------------------------------------------------------------------------ 00:07:54.808 0,0 496544/s 1939 MiB/s 0 0 00:07:54.808 ==================================================================================== 00:07:54.808 Total 496544/s 1939 MiB/s 0 0' 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:54.808 15:58:25 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:54.808 15:58:25 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:54.808 15:58:25 -- accel/accel.sh@12 -- # build_accel_config 00:07:54.808 15:58:25 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:54.808 15:58:25 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:54.808 15:58:25 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:54.808 15:58:25 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:54.808 15:58:25 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:54.808 15:58:25 -- accel/accel.sh@41 -- # local IFS=, 00:07:54.808 15:58:25 -- accel/accel.sh@42 -- # jq -r . 00:07:54.808 [2024-11-20 15:58:25.350650] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:07:54.808 [2024-11-20 15:58:25.350734] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199410 ] 00:07:54.808 EAL: No free 2048 kB hugepages reported on node 1 00:07:54.808 [2024-11-20 15:58:25.421839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.808 [2024-11-20 15:58:25.456075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:54.808 15:58:25 -- accel/accel.sh@21 -- # val= 00:07:54.808 15:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:54.808 15:58:25 -- accel/accel.sh@21 -- # val= 00:07:54.808 15:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:54.808 15:58:25 -- accel/accel.sh@21 -- # val=0x1 00:07:54.808 15:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:54.808 15:58:25 -- accel/accel.sh@21 -- # val= 00:07:54.808 15:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:54.808 15:58:25 -- accel/accel.sh@21 -- # val= 00:07:54.808 15:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:54.808 15:58:25 -- accel/accel.sh@21 -- # val=xor 00:07:54.808 15:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.808 15:58:25 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:54.808 15:58:25 -- accel/accel.sh@21 -- # val=2 00:07:54.808 15:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:54.808 15:58:25 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:54.808 15:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:54.808 15:58:25 -- accel/accel.sh@21 -- # val= 00:07:54.808 15:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:54.808 15:58:25 -- accel/accel.sh@21 -- # val=software 00:07:54.808 15:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.808 15:58:25 -- accel/accel.sh@23 -- # accel_module=software 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:54.808 15:58:25 -- accel/accel.sh@21 -- # val=32 00:07:54.808 15:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:54.808 15:58:25 -- accel/accel.sh@21 -- # val=32 00:07:54.808 15:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:54.808 15:58:25 -- 
accel/accel.sh@21 -- # val=1 00:07:54.808 15:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:54.808 15:58:25 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:54.808 15:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:54.808 15:58:25 -- accel/accel.sh@21 -- # val=Yes 00:07:54.808 15:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:54.808 15:58:25 -- accel/accel.sh@21 -- # val= 00:07:54.808 15:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:54.808 15:58:25 -- accel/accel.sh@21 -- # val= 00:07:54.808 15:58:25 -- accel/accel.sh@22 -- # case "$var" in 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # IFS=: 00:07:54.808 15:58:25 -- accel/accel.sh@20 -- # read -r var val 00:07:56.189 15:58:26 -- accel/accel.sh@21 -- # val= 00:07:56.189 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.189 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:56.189 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:56.189 15:58:26 -- accel/accel.sh@21 -- # val= 00:07:56.189 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.189 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:56.189 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:56.189 15:58:26 -- accel/accel.sh@21 -- # val= 00:07:56.189 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.189 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:56.189 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:56.189 15:58:26 -- accel/accel.sh@21 -- # val= 00:07:56.189 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.189 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:56.189 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:56.189 15:58:26 -- accel/accel.sh@21 -- # val= 00:07:56.189 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.189 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:56.189 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:56.189 15:58:26 -- accel/accel.sh@21 -- # val= 00:07:56.189 15:58:26 -- accel/accel.sh@22 -- # case "$var" in 00:07:56.189 15:58:26 -- accel/accel.sh@20 -- # IFS=: 00:07:56.189 15:58:26 -- accel/accel.sh@20 -- # read -r var val 00:07:56.189 15:58:26 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:56.189 15:58:26 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:56.189 15:58:26 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:56.189 00:07:56.189 real 0m2.602s 00:07:56.189 user 0m2.336s 00:07:56.189 sys 0m0.274s 00:07:56.189 15:58:26 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:56.189 15:58:26 -- common/autotest_common.sh@10 -- # set +x 00:07:56.189 ************************************ 00:07:56.189 END TEST accel_xor 00:07:56.189 ************************************ 00:07:56.189 15:58:26 -- accel/accel.sh@102 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:56.189 15:58:26 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:07:56.189 15:58:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:56.189 15:58:26 -- common/autotest_common.sh@10 -- # set +x 00:07:56.189 ************************************ 00:07:56.189 START TEST accel_xor 
00:07:56.189 ************************************ 00:07:56.189 15:58:26 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w xor -y -x 3 00:07:56.189 15:58:26 -- accel/accel.sh@16 -- # local accel_opc 00:07:56.189 15:58:26 -- accel/accel.sh@17 -- # local accel_module 00:07:56.189 15:58:26 -- accel/accel.sh@18 -- # accel_perf -t 1 -w xor -y -x 3 00:07:56.189 15:58:26 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:56.189 15:58:26 -- accel/accel.sh@12 -- # build_accel_config 00:07:56.189 15:58:26 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:56.189 15:58:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:56.189 15:58:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:56.189 15:58:26 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:56.189 15:58:26 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:56.189 15:58:26 -- accel/accel.sh@41 -- # local IFS=, 00:07:56.189 15:58:26 -- accel/accel.sh@42 -- # jq -r . 00:07:56.189 [2024-11-20 15:58:26.700077] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:56.189 [2024-11-20 15:58:26.700158] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199699 ] 00:07:56.189 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.189 [2024-11-20 15:58:26.770110] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:56.189 [2024-11-20 15:58:26.805315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.570 15:58:27 -- accel/accel.sh@18 -- # out=' 00:07:57.570 SPDK Configuration: 00:07:57.570 Core mask: 0x1 00:07:57.570 00:07:57.570 Accel Perf Configuration: 00:07:57.570 Workload Type: xor 00:07:57.570 Source buffers: 3 00:07:57.570 Transfer size: 4096 bytes 00:07:57.570 Vector count 1 00:07:57.570 Module: software 00:07:57.570 Queue depth: 32 00:07:57.570 Allocate depth: 32 00:07:57.570 # threads/core: 1 00:07:57.570 Run time: 1 seconds 00:07:57.570 Verify: Yes 00:07:57.570 00:07:57.570 Running for 1 seconds... 00:07:57.570 00:07:57.570 Core,Thread Transfers Bandwidth Failed Miscompares 00:07:57.570 ------------------------------------------------------------------------------------ 00:07:57.570 0,0 469888/s 1835 MiB/s 0 0 00:07:57.570 ==================================================================================== 00:07:57.570 Total 469888/s 1835 MiB/s 0 0' 00:07:57.570 15:58:27 -- accel/accel.sh@20 -- # IFS=: 00:07:57.570 15:58:27 -- accel/accel.sh@20 -- # read -r var val 00:07:57.570 15:58:27 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:57.570 15:58:27 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:57.570 15:58:27 -- accel/accel.sh@12 -- # build_accel_config 00:07:57.570 15:58:27 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:57.570 15:58:27 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:57.570 15:58:27 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:57.570 15:58:27 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:57.570 15:58:27 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:57.570 15:58:27 -- accel/accel.sh@41 -- # local IFS=, 00:07:57.570 15:58:27 -- accel/accel.sh@42 -- # jq -r . 00:07:57.570 [2024-11-20 15:58:27.996176] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
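The second accel_xor variant above is run with -x 3, which lines up with the 'Source buffers: 3' line in its configuration dump (the earlier xor run without -x used 2 source buffers). A minimal standalone sketch of the same invocation, again assuming the harness-generated JSON config can be omitted:

  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w xor -y -x 3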
00:07:57.570 [2024-11-20 15:58:27.996242] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1199965 ] 00:07:57.570 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.570 [2024-11-20 15:58:28.064963] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.570 [2024-11-20 15:58:28.098970] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.570 15:58:28 -- accel/accel.sh@21 -- # val= 00:07:57.570 15:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:57.570 15:58:28 -- accel/accel.sh@21 -- # val= 00:07:57.570 15:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:57.570 15:58:28 -- accel/accel.sh@21 -- # val=0x1 00:07:57.570 15:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:57.570 15:58:28 -- accel/accel.sh@21 -- # val= 00:07:57.570 15:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:57.570 15:58:28 -- accel/accel.sh@21 -- # val= 00:07:57.570 15:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:57.570 15:58:28 -- accel/accel.sh@21 -- # val=xor 00:07:57.570 15:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.570 15:58:28 -- accel/accel.sh@24 -- # accel_opc=xor 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:57.570 15:58:28 -- accel/accel.sh@21 -- # val=3 00:07:57.570 15:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:57.570 15:58:28 -- accel/accel.sh@21 -- # val='4096 bytes' 00:07:57.570 15:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:57.570 15:58:28 -- accel/accel.sh@21 -- # val= 00:07:57.570 15:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:57.570 15:58:28 -- accel/accel.sh@21 -- # val=software 00:07:57.570 15:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.570 15:58:28 -- accel/accel.sh@23 -- # accel_module=software 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:57.570 15:58:28 -- accel/accel.sh@21 -- # val=32 00:07:57.570 15:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:57.570 15:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:57.570 15:58:28 -- accel/accel.sh@21 -- # val=32 00:07:57.571 15:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.571 15:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:57.571 15:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:57.571 15:58:28 -- 
accel/accel.sh@21 -- # val=1 00:07:57.571 15:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.571 15:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:57.571 15:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:57.571 15:58:28 -- accel/accel.sh@21 -- # val='1 seconds' 00:07:57.571 15:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.571 15:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:57.571 15:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:57.571 15:58:28 -- accel/accel.sh@21 -- # val=Yes 00:07:57.571 15:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.571 15:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:57.571 15:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:57.571 15:58:28 -- accel/accel.sh@21 -- # val= 00:07:57.571 15:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.571 15:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:57.571 15:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:57.571 15:58:28 -- accel/accel.sh@21 -- # val= 00:07:57.571 15:58:28 -- accel/accel.sh@22 -- # case "$var" in 00:07:57.571 15:58:28 -- accel/accel.sh@20 -- # IFS=: 00:07:57.571 15:58:28 -- accel/accel.sh@20 -- # read -r var val 00:07:58.509 15:58:29 -- accel/accel.sh@21 -- # val= 00:07:58.509 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.509 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:58.509 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:58.509 15:58:29 -- accel/accel.sh@21 -- # val= 00:07:58.509 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.509 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:58.509 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:58.509 15:58:29 -- accel/accel.sh@21 -- # val= 00:07:58.509 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.509 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:58.509 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:58.509 15:58:29 -- accel/accel.sh@21 -- # val= 00:07:58.509 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.509 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:58.509 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:58.509 15:58:29 -- accel/accel.sh@21 -- # val= 00:07:58.509 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.509 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:58.509 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:58.509 15:58:29 -- accel/accel.sh@21 -- # val= 00:07:58.509 15:58:29 -- accel/accel.sh@22 -- # case "$var" in 00:07:58.509 15:58:29 -- accel/accel.sh@20 -- # IFS=: 00:07:58.509 15:58:29 -- accel/accel.sh@20 -- # read -r var val 00:07:58.509 15:58:29 -- accel/accel.sh@28 -- # [[ -n software ]] 00:07:58.509 15:58:29 -- accel/accel.sh@28 -- # [[ -n xor ]] 00:07:58.509 15:58:29 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:58.509 00:07:58.509 real 0m2.594s 00:07:58.509 user 0m2.356s 00:07:58.509 sys 0m0.248s 00:07:58.509 15:58:29 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:07:58.509 15:58:29 -- common/autotest_common.sh@10 -- # set +x 00:07:58.509 ************************************ 00:07:58.509 END TEST accel_xor 00:07:58.509 ************************************ 00:07:58.509 15:58:29 -- accel/accel.sh@103 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:58.509 15:58:29 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:07:58.509 15:58:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:07:58.509 15:58:29 -- common/autotest_common.sh@10 -- # set +x 00:07:58.771 ************************************ 00:07:58.771 START TEST 
accel_dif_verify 00:07:58.771 ************************************ 00:07:58.771 15:58:29 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_verify 00:07:58.771 15:58:29 -- accel/accel.sh@16 -- # local accel_opc 00:07:58.771 15:58:29 -- accel/accel.sh@17 -- # local accel_module 00:07:58.771 15:58:29 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_verify 00:07:58.771 15:58:29 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:58.771 15:58:29 -- accel/accel.sh@12 -- # build_accel_config 00:07:58.771 15:58:29 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:07:58.771 15:58:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:58.771 15:58:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:58.771 15:58:29 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:07:58.771 15:58:29 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:07:58.771 15:58:29 -- accel/accel.sh@41 -- # local IFS=, 00:07:58.771 15:58:29 -- accel/accel.sh@42 -- # jq -r . 00:07:58.771 [2024-11-20 15:58:29.342650] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:07:58.771 [2024-11-20 15:58:29.342719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1200246 ] 00:07:58.771 EAL: No free 2048 kB hugepages reported on node 1 00:07:58.771 [2024-11-20 15:58:29.412072] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:58.771 [2024-11-20 15:58:29.447085] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.213 15:58:30 -- accel/accel.sh@18 -- # out=' 00:08:00.213 SPDK Configuration: 00:08:00.213 Core mask: 0x1 00:08:00.213 00:08:00.213 Accel Perf Configuration: 00:08:00.213 Workload Type: dif_verify 00:08:00.213 Vector size: 4096 bytes 00:08:00.213 Transfer size: 4096 bytes 00:08:00.213 Block size: 512 bytes 00:08:00.213 Metadata size: 8 bytes 00:08:00.213 Vector count 1 00:08:00.213 Module: software 00:08:00.213 Queue depth: 32 00:08:00.213 Allocate depth: 32 00:08:00.213 # threads/core: 1 00:08:00.213 Run time: 1 seconds 00:08:00.213 Verify: No 00:08:00.213 00:08:00.213 Running for 1 seconds... 00:08:00.213 00:08:00.213 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:00.213 ------------------------------------------------------------------------------------ 00:08:00.213 0,0 136096/s 539 MiB/s 0 0 00:08:00.213 ==================================================================================== 00:08:00.213 Total 136096/s 531 MiB/s 0 0' 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:08:00.213 15:58:30 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:08:00.213 15:58:30 -- accel/accel.sh@12 -- # build_accel_config 00:08:00.213 15:58:30 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:00.213 15:58:30 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:00.213 15:58:30 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:00.213 15:58:30 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:00.213 15:58:30 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:00.213 15:58:30 -- accel/accel.sh@41 -- # local IFS=, 00:08:00.213 15:58:30 -- accel/accel.sh@42 -- # jq -r . 
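The dif_verify pass follows the same pattern with -w dif_verify and no -y, and its configuration dump reports 4096-byte vectors carrying 512-byte blocks with 8 bytes of metadata. A hedged standalone sketch of that invocation, under the same build-tree assumption as above:

    # DIF verify workload, 1 second run; -y is omitted, matching the
    # "Verify: No" line in the configuration dump above
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_verify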
00:08:00.213 [2024-11-20 15:58:30.639885] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:00.213 [2024-11-20 15:58:30.639961] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1200467 ] 00:08:00.213 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.213 [2024-11-20 15:58:30.710850] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.213 [2024-11-20 15:58:30.745794] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val= 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val= 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val=0x1 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val= 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val= 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val=dif_verify 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@24 -- # accel_opc=dif_verify 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val='512 bytes' 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val='8 bytes' 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val= 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val=software 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@23 -- # 
accel_module=software 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val=32 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val=32 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val=1 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val=No 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val= 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:00.213 15:58:30 -- accel/accel.sh@21 -- # val= 00:08:00.213 15:58:30 -- accel/accel.sh@22 -- # case "$var" in 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # IFS=: 00:08:00.213 15:58:30 -- accel/accel.sh@20 -- # read -r var val 00:08:01.152 15:58:31 -- accel/accel.sh@21 -- # val= 00:08:01.152 15:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.152 15:58:31 -- accel/accel.sh@20 -- # IFS=: 00:08:01.152 15:58:31 -- accel/accel.sh@20 -- # read -r var val 00:08:01.152 15:58:31 -- accel/accel.sh@21 -- # val= 00:08:01.152 15:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.152 15:58:31 -- accel/accel.sh@20 -- # IFS=: 00:08:01.152 15:58:31 -- accel/accel.sh@20 -- # read -r var val 00:08:01.152 15:58:31 -- accel/accel.sh@21 -- # val= 00:08:01.152 15:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.152 15:58:31 -- accel/accel.sh@20 -- # IFS=: 00:08:01.152 15:58:31 -- accel/accel.sh@20 -- # read -r var val 00:08:01.152 15:58:31 -- accel/accel.sh@21 -- # val= 00:08:01.152 15:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.152 15:58:31 -- accel/accel.sh@20 -- # IFS=: 00:08:01.152 15:58:31 -- accel/accel.sh@20 -- # read -r var val 00:08:01.152 15:58:31 -- accel/accel.sh@21 -- # val= 00:08:01.152 15:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.152 15:58:31 -- accel/accel.sh@20 -- # IFS=: 00:08:01.152 15:58:31 -- accel/accel.sh@20 -- # read -r var val 00:08:01.152 15:58:31 -- accel/accel.sh@21 -- # val= 00:08:01.152 15:58:31 -- accel/accel.sh@22 -- # case "$var" in 00:08:01.152 15:58:31 -- accel/accel.sh@20 -- # IFS=: 00:08:01.152 15:58:31 -- accel/accel.sh@20 -- # read -r var val 00:08:01.152 15:58:31 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:01.152 15:58:31 -- accel/accel.sh@28 -- # [[ -n dif_verify ]] 00:08:01.152 15:58:31 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:01.152 00:08:01.152 real 0m2.599s 00:08:01.152 user 0m2.343s 00:08:01.152 sys 0m0.267s 00:08:01.152 15:58:31 -- 
common/autotest_common.sh@1115 -- # xtrace_disable 00:08:01.152 15:58:31 -- common/autotest_common.sh@10 -- # set +x 00:08:01.152 ************************************ 00:08:01.152 END TEST accel_dif_verify 00:08:01.152 ************************************ 00:08:01.411 15:58:31 -- accel/accel.sh@104 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:08:01.411 15:58:31 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:01.411 15:58:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:01.411 15:58:31 -- common/autotest_common.sh@10 -- # set +x 00:08:01.411 ************************************ 00:08:01.411 START TEST accel_dif_generate 00:08:01.411 ************************************ 00:08:01.411 15:58:31 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate 00:08:01.411 15:58:31 -- accel/accel.sh@16 -- # local accel_opc 00:08:01.411 15:58:31 -- accel/accel.sh@17 -- # local accel_module 00:08:01.411 15:58:31 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate 00:08:01.411 15:58:31 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:01.411 15:58:31 -- accel/accel.sh@12 -- # build_accel_config 00:08:01.411 15:58:31 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:01.411 15:58:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:01.411 15:58:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:01.412 15:58:31 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:01.412 15:58:31 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:01.412 15:58:31 -- accel/accel.sh@41 -- # local IFS=, 00:08:01.412 15:58:31 -- accel/accel.sh@42 -- # jq -r . 00:08:01.412 [2024-11-20 15:58:31.991498] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:01.412 [2024-11-20 15:58:31.991568] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1200658 ] 00:08:01.412 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.412 [2024-11-20 15:58:32.061741] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.412 [2024-11-20 15:58:32.096923] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.790 15:58:33 -- accel/accel.sh@18 -- # out=' 00:08:02.790 SPDK Configuration: 00:08:02.790 Core mask: 0x1 00:08:02.790 00:08:02.790 Accel Perf Configuration: 00:08:02.790 Workload Type: dif_generate 00:08:02.791 Vector size: 4096 bytes 00:08:02.791 Transfer size: 4096 bytes 00:08:02.791 Block size: 512 bytes 00:08:02.791 Metadata size: 8 bytes 00:08:02.791 Vector count 1 00:08:02.791 Module: software 00:08:02.791 Queue depth: 32 00:08:02.791 Allocate depth: 32 00:08:02.791 # threads/core: 1 00:08:02.791 Run time: 1 seconds 00:08:02.791 Verify: No 00:08:02.791 00:08:02.791 Running for 1 seconds... 
00:08:02.791 00:08:02.791 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:02.791 ------------------------------------------------------------------------------------ 00:08:02.791 0,0 169920/s 674 MiB/s 0 0 00:08:02.791 ==================================================================================== 00:08:02.791 Total 169920/s 663 MiB/s 0 0' 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:08:02.791 15:58:33 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:08:02.791 15:58:33 -- accel/accel.sh@12 -- # build_accel_config 00:08:02.791 15:58:33 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:02.791 15:58:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:02.791 15:58:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:02.791 15:58:33 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:02.791 15:58:33 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:02.791 15:58:33 -- accel/accel.sh@41 -- # local IFS=, 00:08:02.791 15:58:33 -- accel/accel.sh@42 -- # jq -r . 00:08:02.791 [2024-11-20 15:58:33.287364] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:02.791 [2024-11-20 15:58:33.287430] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1200830 ] 00:08:02.791 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.791 [2024-11-20 15:58:33.357848] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.791 [2024-11-20 15:58:33.392216] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val= 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val= 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val=0x1 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val= 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val= 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val=dif_generate 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@24 -- # accel_opc=dif_generate 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 
00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val='512 bytes' 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val='8 bytes' 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val= 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val=software 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@23 -- # accel_module=software 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val=32 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val=32 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val=1 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val=No 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val= 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:02.791 15:58:33 -- accel/accel.sh@21 -- # val= 00:08:02.791 15:58:33 -- accel/accel.sh@22 -- # case "$var" in 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # IFS=: 00:08:02.791 15:58:33 -- accel/accel.sh@20 -- # read -r var val 00:08:04.172 15:58:34 -- accel/accel.sh@21 -- # val= 00:08:04.172 15:58:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.172 15:58:34 -- accel/accel.sh@20 -- # IFS=: 00:08:04.172 15:58:34 -- accel/accel.sh@20 -- # read -r var val 00:08:04.172 15:58:34 -- accel/accel.sh@21 -- # val= 00:08:04.172 15:58:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.172 15:58:34 -- accel/accel.sh@20 -- # IFS=: 00:08:04.172 15:58:34 -- accel/accel.sh@20 -- # read -r var val 00:08:04.172 15:58:34 -- accel/accel.sh@21 -- # val= 00:08:04.172 15:58:34 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:04.172 15:58:34 -- accel/accel.sh@20 -- # IFS=: 00:08:04.172 15:58:34 -- accel/accel.sh@20 -- # read -r var val 00:08:04.172 15:58:34 -- accel/accel.sh@21 -- # val= 00:08:04.172 15:58:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.172 15:58:34 -- accel/accel.sh@20 -- # IFS=: 00:08:04.172 15:58:34 -- accel/accel.sh@20 -- # read -r var val 00:08:04.172 15:58:34 -- accel/accel.sh@21 -- # val= 00:08:04.172 15:58:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.172 15:58:34 -- accel/accel.sh@20 -- # IFS=: 00:08:04.172 15:58:34 -- accel/accel.sh@20 -- # read -r var val 00:08:04.172 15:58:34 -- accel/accel.sh@21 -- # val= 00:08:04.172 15:58:34 -- accel/accel.sh@22 -- # case "$var" in 00:08:04.172 15:58:34 -- accel/accel.sh@20 -- # IFS=: 00:08:04.172 15:58:34 -- accel/accel.sh@20 -- # read -r var val 00:08:04.172 15:58:34 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:04.172 15:58:34 -- accel/accel.sh@28 -- # [[ -n dif_generate ]] 00:08:04.172 15:58:34 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:04.172 00:08:04.172 real 0m2.596s 00:08:04.172 user 0m2.348s 00:08:04.172 sys 0m0.258s 00:08:04.172 15:58:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:04.172 15:58:34 -- common/autotest_common.sh@10 -- # set +x 00:08:04.172 ************************************ 00:08:04.172 END TEST accel_dif_generate 00:08:04.173 ************************************ 00:08:04.173 15:58:34 -- accel/accel.sh@105 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:08:04.173 15:58:34 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:08:04.173 15:58:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:04.173 15:58:34 -- common/autotest_common.sh@10 -- # set +x 00:08:04.173 ************************************ 00:08:04.173 START TEST accel_dif_generate_copy 00:08:04.173 ************************************ 00:08:04.173 15:58:34 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w dif_generate_copy 00:08:04.173 15:58:34 -- accel/accel.sh@16 -- # local accel_opc 00:08:04.173 15:58:34 -- accel/accel.sh@17 -- # local accel_module 00:08:04.173 15:58:34 -- accel/accel.sh@18 -- # accel_perf -t 1 -w dif_generate_copy 00:08:04.173 15:58:34 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:04.173 15:58:34 -- accel/accel.sh@12 -- # build_accel_config 00:08:04.173 15:58:34 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:04.173 15:58:34 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:04.173 15:58:34 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:04.173 15:58:34 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:04.173 15:58:34 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:04.173 15:58:34 -- accel/accel.sh@41 -- # local IFS=, 00:08:04.173 15:58:34 -- accel/accel.sh@42 -- # jq -r . 00:08:04.173 [2024-11-20 15:58:34.633257] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:04.173 [2024-11-20 15:58:34.633324] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201112 ] 00:08:04.173 EAL: No free 2048 kB hugepages reported on node 1 00:08:04.173 [2024-11-20 15:58:34.702839] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.173 [2024-11-20 15:58:34.738017] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.111 15:58:35 -- accel/accel.sh@18 -- # out=' 00:08:05.111 SPDK Configuration: 00:08:05.111 Core mask: 0x1 00:08:05.111 00:08:05.111 Accel Perf Configuration: 00:08:05.111 Workload Type: dif_generate_copy 00:08:05.111 Vector size: 4096 bytes 00:08:05.111 Transfer size: 4096 bytes 00:08:05.111 Vector count 1 00:08:05.111 Module: software 00:08:05.111 Queue depth: 32 00:08:05.111 Allocate depth: 32 00:08:05.111 # threads/core: 1 00:08:05.111 Run time: 1 seconds 00:08:05.111 Verify: No 00:08:05.111 00:08:05.111 Running for 1 seconds... 00:08:05.111 00:08:05.111 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:05.111 ------------------------------------------------------------------------------------ 00:08:05.111 0,0 128544/s 509 MiB/s 0 0 00:08:05.111 ==================================================================================== 00:08:05.111 Total 128544/s 502 MiB/s 0 0' 00:08:05.111 15:58:35 -- accel/accel.sh@20 -- # IFS=: 00:08:05.111 15:58:35 -- accel/accel.sh@20 -- # read -r var val 00:08:05.111 15:58:35 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:08:05.111 15:58:35 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:08:05.111 15:58:35 -- accel/accel.sh@12 -- # build_accel_config 00:08:05.111 15:58:35 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:05.111 15:58:35 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:05.111 15:58:35 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:05.111 15:58:35 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:05.111 15:58:35 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:05.111 15:58:35 -- accel/accel.sh@41 -- # local IFS=, 00:08:05.111 15:58:35 -- accel/accel.sh@42 -- # jq -r . 00:08:05.371 [2024-11-20 15:58:35.927531] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
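The dif_generate and dif_generate_copy passes differ from dif_verify only in the workload flag passed to accel_perf. Sketches of the two invocations as they appear in this log, again assuming the job's build tree and the default software module:

    # DIF generate workload, 1 second
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate
    # DIF generate-and-copy workload, 1 second
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dif_generate_copy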
00:08:05.371 [2024-11-20 15:58:35.927600] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201386 ] 00:08:05.371 EAL: No free 2048 kB hugepages reported on node 1 00:08:05.371 [2024-11-20 15:58:35.995168] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.371 [2024-11-20 15:58:36.029489] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.371 15:58:36 -- accel/accel.sh@21 -- # val= 00:08:05.371 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:05.371 15:58:36 -- accel/accel.sh@21 -- # val= 00:08:05.371 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:05.371 15:58:36 -- accel/accel.sh@21 -- # val=0x1 00:08:05.371 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:05.371 15:58:36 -- accel/accel.sh@21 -- # val= 00:08:05.371 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:05.371 15:58:36 -- accel/accel.sh@21 -- # val= 00:08:05.371 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:05.371 15:58:36 -- accel/accel.sh@21 -- # val=dif_generate_copy 00:08:05.371 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.371 15:58:36 -- accel/accel.sh@24 -- # accel_opc=dif_generate_copy 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:05.371 15:58:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:05.371 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:05.371 15:58:36 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:05.371 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:05.371 15:58:36 -- accel/accel.sh@21 -- # val= 00:08:05.371 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:05.371 15:58:36 -- accel/accel.sh@21 -- # val=software 00:08:05.371 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.371 15:58:36 -- accel/accel.sh@23 -- # accel_module=software 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:05.371 15:58:36 -- accel/accel.sh@21 -- # val=32 00:08:05.371 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:05.371 15:58:36 -- accel/accel.sh@21 -- # val=32 00:08:05.371 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # read -r 
var val 00:08:05.371 15:58:36 -- accel/accel.sh@21 -- # val=1 00:08:05.371 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:05.371 15:58:36 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:05.371 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:05.371 15:58:36 -- accel/accel.sh@21 -- # val=No 00:08:05.371 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:05.371 15:58:36 -- accel/accel.sh@21 -- # val= 00:08:05.371 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:05.371 15:58:36 -- accel/accel.sh@21 -- # val= 00:08:05.371 15:58:36 -- accel/accel.sh@22 -- # case "$var" in 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # IFS=: 00:08:05.371 15:58:36 -- accel/accel.sh@20 -- # read -r var val 00:08:06.749 15:58:37 -- accel/accel.sh@21 -- # val= 00:08:06.749 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.749 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:08:06.749 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:08:06.750 15:58:37 -- accel/accel.sh@21 -- # val= 00:08:06.750 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.750 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:08:06.750 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:08:06.750 15:58:37 -- accel/accel.sh@21 -- # val= 00:08:06.750 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.750 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:08:06.750 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:08:06.750 15:58:37 -- accel/accel.sh@21 -- # val= 00:08:06.750 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.750 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:08:06.750 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:08:06.750 15:58:37 -- accel/accel.sh@21 -- # val= 00:08:06.750 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.750 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:08:06.750 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:08:06.750 15:58:37 -- accel/accel.sh@21 -- # val= 00:08:06.750 15:58:37 -- accel/accel.sh@22 -- # case "$var" in 00:08:06.750 15:58:37 -- accel/accel.sh@20 -- # IFS=: 00:08:06.750 15:58:37 -- accel/accel.sh@20 -- # read -r var val 00:08:06.750 15:58:37 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:06.750 15:58:37 -- accel/accel.sh@28 -- # [[ -n dif_generate_copy ]] 00:08:06.750 15:58:37 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:06.750 00:08:06.750 real 0m2.592s 00:08:06.750 user 0m2.351s 00:08:06.750 sys 0m0.249s 00:08:06.750 15:58:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:06.750 15:58:37 -- common/autotest_common.sh@10 -- # set +x 00:08:06.750 ************************************ 00:08:06.750 END TEST accel_dif_generate_copy 00:08:06.750 ************************************ 00:08:06.750 15:58:37 -- accel/accel.sh@107 -- # [[ y == y ]] 00:08:06.750 15:58:37 -- accel/accel.sh@108 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:06.750 15:58:37 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:08:06.750 15:58:37 -- 
common/autotest_common.sh@1093 -- # xtrace_disable 00:08:06.750 15:58:37 -- common/autotest_common.sh@10 -- # set +x 00:08:06.750 ************************************ 00:08:06.750 START TEST accel_comp 00:08:06.750 ************************************ 00:08:06.750 15:58:37 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:06.750 15:58:37 -- accel/accel.sh@16 -- # local accel_opc 00:08:06.750 15:58:37 -- accel/accel.sh@17 -- # local accel_module 00:08:06.750 15:58:37 -- accel/accel.sh@18 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:06.750 15:58:37 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:06.750 15:58:37 -- accel/accel.sh@12 -- # build_accel_config 00:08:06.750 15:58:37 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:06.750 15:58:37 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:06.750 15:58:37 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:06.750 15:58:37 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:06.750 15:58:37 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:06.750 15:58:37 -- accel/accel.sh@41 -- # local IFS=, 00:08:06.750 15:58:37 -- accel/accel.sh@42 -- # jq -r . 00:08:06.750 [2024-11-20 15:58:37.275186] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:06.750 [2024-11-20 15:58:37.275267] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201667 ] 00:08:06.750 EAL: No free 2048 kB hugepages reported on node 1 00:08:06.750 [2024-11-20 15:58:37.346202] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.750 [2024-11-20 15:58:37.381210] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.128 15:58:38 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:08.128 00:08:08.128 SPDK Configuration: 00:08:08.128 Core mask: 0x1 00:08:08.128 00:08:08.128 Accel Perf Configuration: 00:08:08.128 Workload Type: compress 00:08:08.128 Transfer size: 4096 bytes 00:08:08.128 Vector count 1 00:08:08.128 Module: software 00:08:08.128 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:08.129 Queue depth: 32 00:08:08.129 Allocate depth: 32 00:08:08.129 # threads/core: 1 00:08:08.129 Run time: 1 seconds 00:08:08.129 Verify: No 00:08:08.129 00:08:08.129 Running for 1 seconds... 
00:08:08.129 00:08:08.129 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:08.129 ------------------------------------------------------------------------------------ 00:08:08.129 0,0 63552/s 264 MiB/s 0 0 00:08:08.129 ==================================================================================== 00:08:08.129 Total 63552/s 248 MiB/s 0 0' 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:08.129 15:58:38 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:08.129 15:58:38 -- accel/accel.sh@12 -- # build_accel_config 00:08:08.129 15:58:38 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:08.129 15:58:38 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:08.129 15:58:38 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:08.129 15:58:38 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:08.129 15:58:38 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:08.129 15:58:38 -- accel/accel.sh@41 -- # local IFS=, 00:08:08.129 15:58:38 -- accel/accel.sh@42 -- # jq -r . 00:08:08.129 [2024-11-20 15:58:38.575374] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:08.129 [2024-11-20 15:58:38.575444] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1201935 ] 00:08:08.129 EAL: No free 2048 kB hugepages reported on node 1 00:08:08.129 [2024-11-20 15:58:38.644004] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.129 [2024-11-20 15:58:38.677991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val= 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val= 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val= 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val=0x1 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val= 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val= 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val=compress 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- 
accel/accel.sh@24 -- # accel_opc=compress 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val= 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val=software 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- accel/accel.sh@23 -- # accel_module=software 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val=32 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val=32 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val=1 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val=No 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val= 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:08.129 15:58:38 -- accel/accel.sh@21 -- # val= 00:08:08.129 15:58:38 -- accel/accel.sh@22 -- # case "$var" in 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # IFS=: 00:08:08.129 15:58:38 -- accel/accel.sh@20 -- # read -r var val 00:08:09.066 15:58:39 -- accel/accel.sh@21 -- # val= 00:08:09.066 15:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.066 15:58:39 -- accel/accel.sh@20 -- # IFS=: 00:08:09.066 15:58:39 -- accel/accel.sh@20 -- # read -r var val 00:08:09.066 15:58:39 -- accel/accel.sh@21 -- # val= 00:08:09.066 15:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.066 15:58:39 -- accel/accel.sh@20 -- # IFS=: 00:08:09.066 15:58:39 -- accel/accel.sh@20 -- # read -r var val 00:08:09.066 15:58:39 -- accel/accel.sh@21 -- # val= 00:08:09.067 15:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.067 15:58:39 -- accel/accel.sh@20 -- # IFS=: 00:08:09.067 
15:58:39 -- accel/accel.sh@20 -- # read -r var val 00:08:09.067 15:58:39 -- accel/accel.sh@21 -- # val= 00:08:09.067 15:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.067 15:58:39 -- accel/accel.sh@20 -- # IFS=: 00:08:09.067 15:58:39 -- accel/accel.sh@20 -- # read -r var val 00:08:09.067 15:58:39 -- accel/accel.sh@21 -- # val= 00:08:09.067 15:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.067 15:58:39 -- accel/accel.sh@20 -- # IFS=: 00:08:09.067 15:58:39 -- accel/accel.sh@20 -- # read -r var val 00:08:09.067 15:58:39 -- accel/accel.sh@21 -- # val= 00:08:09.067 15:58:39 -- accel/accel.sh@22 -- # case "$var" in 00:08:09.067 15:58:39 -- accel/accel.sh@20 -- # IFS=: 00:08:09.067 15:58:39 -- accel/accel.sh@20 -- # read -r var val 00:08:09.067 15:58:39 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:09.067 15:58:39 -- accel/accel.sh@28 -- # [[ -n compress ]] 00:08:09.067 15:58:39 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:09.067 00:08:09.067 real 0m2.604s 00:08:09.067 user 0m2.350s 00:08:09.067 sys 0m0.265s 00:08:09.067 15:58:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:09.067 15:58:39 -- common/autotest_common.sh@10 -- # set +x 00:08:09.067 ************************************ 00:08:09.067 END TEST accel_comp 00:08:09.067 ************************************ 00:08:09.326 15:58:39 -- accel/accel.sh@109 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:09.326 15:58:39 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:08:09.326 15:58:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:09.326 15:58:39 -- common/autotest_common.sh@10 -- # set +x 00:08:09.326 ************************************ 00:08:09.326 START TEST accel_decomp 00:08:09.326 ************************************ 00:08:09.326 15:58:39 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:09.326 15:58:39 -- accel/accel.sh@16 -- # local accel_opc 00:08:09.326 15:58:39 -- accel/accel.sh@17 -- # local accel_module 00:08:09.326 15:58:39 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:09.327 15:58:39 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:09.327 15:58:39 -- accel/accel.sh@12 -- # build_accel_config 00:08:09.327 15:58:39 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:09.327 15:58:39 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:09.327 15:58:39 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:09.327 15:58:39 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:09.327 15:58:39 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:09.327 15:58:39 -- accel/accel.sh@41 -- # local IFS=, 00:08:09.327 15:58:39 -- accel/accel.sh@42 -- # jq -r . 00:08:09.327 [2024-11-20 15:58:39.920166] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:09.327 [2024-11-20 15:58:39.920230] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202201 ] 00:08:09.327 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.327 [2024-11-20 15:58:39.988867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:09.327 [2024-11-20 15:58:40.025230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.706 15:58:41 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:10.706 00:08:10.706 SPDK Configuration: 00:08:10.706 Core mask: 0x1 00:08:10.706 00:08:10.706 Accel Perf Configuration: 00:08:10.706 Workload Type: decompress 00:08:10.706 Transfer size: 4096 bytes 00:08:10.706 Vector count 1 00:08:10.706 Module: software 00:08:10.706 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:10.706 Queue depth: 32 00:08:10.706 Allocate depth: 32 00:08:10.706 # threads/core: 1 00:08:10.706 Run time: 1 seconds 00:08:10.706 Verify: Yes 00:08:10.706 00:08:10.706 Running for 1 seconds... 00:08:10.706 00:08:10.706 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:10.706 ------------------------------------------------------------------------------------ 00:08:10.706 0,0 85344/s 157 MiB/s 0 0 00:08:10.706 ==================================================================================== 00:08:10.706 Total 85344/s 333 MiB/s 0 0' 00:08:10.706 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.706 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.706 15:58:41 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:10.706 15:58:41 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y 00:08:10.706 15:58:41 -- accel/accel.sh@12 -- # build_accel_config 00:08:10.706 15:58:41 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:10.706 15:58:41 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:10.706 15:58:41 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:10.706 15:58:41 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:10.706 15:58:41 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:10.706 15:58:41 -- accel/accel.sh@41 -- # local IFS=, 00:08:10.706 15:58:41 -- accel/accel.sh@42 -- # jq -r . 00:08:10.706 [2024-11-20 15:58:41.222381] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
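The compress and decompress passes point accel_perf at the checked-in test corpus via -l (shown as "File Name: .../spdk/test/accel/bib" in the configuration dumps), and decompress additionally passes -y to verify the output. Standalone sketches under the same assumptions; the BIB variable is only a local convenience, not something the harness defines:

    BIB=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib
    # software compression of the corpus, 1 second
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w compress -l "$BIB"
    # decompression of the same corpus with verification
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -t 1 -w decompress -l "$BIB" -y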
00:08:10.706 [2024-11-20 15:58:41.222445] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202351 ] 00:08:10.706 EAL: No free 2048 kB hugepages reported on node 1 00:08:10.706 [2024-11-20 15:58:41.293181] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.706 [2024-11-20 15:58:41.328013] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.706 15:58:41 -- accel/accel.sh@21 -- # val= 00:08:10.706 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.706 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.706 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.706 15:58:41 -- accel/accel.sh@21 -- # val= 00:08:10.706 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.706 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.706 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.706 15:58:41 -- accel/accel.sh@21 -- # val= 00:08:10.707 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.707 15:58:41 -- accel/accel.sh@21 -- # val=0x1 00:08:10.707 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.707 15:58:41 -- accel/accel.sh@21 -- # val= 00:08:10.707 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.707 15:58:41 -- accel/accel.sh@21 -- # val= 00:08:10.707 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.707 15:58:41 -- accel/accel.sh@21 -- # val=decompress 00:08:10.707 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.707 15:58:41 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.707 15:58:41 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:10.707 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.707 15:58:41 -- accel/accel.sh@21 -- # val= 00:08:10.707 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.707 15:58:41 -- accel/accel.sh@21 -- # val=software 00:08:10.707 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.707 15:58:41 -- accel/accel.sh@23 -- # accel_module=software 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.707 15:58:41 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:10.707 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.707 15:58:41 -- accel/accel.sh@21 -- # val=32 00:08:10.707 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.707 15:58:41 -- 
accel/accel.sh@20 -- # read -r var val 00:08:10.707 15:58:41 -- accel/accel.sh@21 -- # val=32 00:08:10.707 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.707 15:58:41 -- accel/accel.sh@21 -- # val=1 00:08:10.707 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.707 15:58:41 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:10.707 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.707 15:58:41 -- accel/accel.sh@21 -- # val=Yes 00:08:10.707 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.707 15:58:41 -- accel/accel.sh@21 -- # val= 00:08:10.707 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:10.707 15:58:41 -- accel/accel.sh@21 -- # val= 00:08:10.707 15:58:41 -- accel/accel.sh@22 -- # case "$var" in 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # IFS=: 00:08:10.707 15:58:41 -- accel/accel.sh@20 -- # read -r var val 00:08:12.088 15:58:42 -- accel/accel.sh@21 -- # val= 00:08:12.088 15:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.088 15:58:42 -- accel/accel.sh@20 -- # IFS=: 00:08:12.088 15:58:42 -- accel/accel.sh@20 -- # read -r var val 00:08:12.088 15:58:42 -- accel/accel.sh@21 -- # val= 00:08:12.088 15:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.088 15:58:42 -- accel/accel.sh@20 -- # IFS=: 00:08:12.088 15:58:42 -- accel/accel.sh@20 -- # read -r var val 00:08:12.088 15:58:42 -- accel/accel.sh@21 -- # val= 00:08:12.088 15:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.088 15:58:42 -- accel/accel.sh@20 -- # IFS=: 00:08:12.088 15:58:42 -- accel/accel.sh@20 -- # read -r var val 00:08:12.088 15:58:42 -- accel/accel.sh@21 -- # val= 00:08:12.088 15:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.088 15:58:42 -- accel/accel.sh@20 -- # IFS=: 00:08:12.088 15:58:42 -- accel/accel.sh@20 -- # read -r var val 00:08:12.088 15:58:42 -- accel/accel.sh@21 -- # val= 00:08:12.088 15:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.088 15:58:42 -- accel/accel.sh@20 -- # IFS=: 00:08:12.088 15:58:42 -- accel/accel.sh@20 -- # read -r var val 00:08:12.088 15:58:42 -- accel/accel.sh@21 -- # val= 00:08:12.088 15:58:42 -- accel/accel.sh@22 -- # case "$var" in 00:08:12.088 15:58:42 -- accel/accel.sh@20 -- # IFS=: 00:08:12.088 15:58:42 -- accel/accel.sh@20 -- # read -r var val 00:08:12.088 15:58:42 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:12.088 15:58:42 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:12.088 15:58:42 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:12.088 00:08:12.088 real 0m2.608s 00:08:12.088 user 0m2.370s 00:08:12.088 sys 0m0.246s 00:08:12.088 15:58:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:12.088 15:58:42 -- common/autotest_common.sh@10 -- # set +x 00:08:12.088 ************************************ 00:08:12.088 END TEST accel_decomp 00:08:12.088 ************************************ 00:08:12.088 15:58:42 -- accel/accel.sh@110 -- # run_test accel_decmop_full accel_test -t 1 -w 
decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:12.088 15:58:42 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:12.088 15:58:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:12.088 15:58:42 -- common/autotest_common.sh@10 -- # set +x 00:08:12.088 ************************************ 00:08:12.088 START TEST accel_decmop_full 00:08:12.088 ************************************ 00:08:12.088 15:58:42 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:12.088 15:58:42 -- accel/accel.sh@16 -- # local accel_opc 00:08:12.088 15:58:42 -- accel/accel.sh@17 -- # local accel_module 00:08:12.088 15:58:42 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:12.088 15:58:42 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:12.088 15:58:42 -- accel/accel.sh@12 -- # build_accel_config 00:08:12.088 15:58:42 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:12.088 15:58:42 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:12.088 15:58:42 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:12.088 15:58:42 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:12.088 15:58:42 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:12.088 15:58:42 -- accel/accel.sh@41 -- # local IFS=, 00:08:12.088 15:58:42 -- accel/accel.sh@42 -- # jq -r . 00:08:12.088 [2024-11-20 15:58:42.577539] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:12.088 [2024-11-20 15:58:42.577606] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202546 ] 00:08:12.088 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.088 [2024-11-20 15:58:42.647769] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.088 [2024-11-20 15:58:42.683305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.469 15:58:43 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:13.469 00:08:13.469 SPDK Configuration: 00:08:13.469 Core mask: 0x1 00:08:13.469 00:08:13.469 Accel Perf Configuration: 00:08:13.469 Workload Type: decompress 00:08:13.469 Transfer size: 111250 bytes 00:08:13.469 Vector count 1 00:08:13.469 Module: software 00:08:13.469 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:13.469 Queue depth: 32 00:08:13.469 Allocate depth: 32 00:08:13.469 # threads/core: 1 00:08:13.469 Run time: 1 seconds 00:08:13.469 Verify: Yes 00:08:13.469 00:08:13.469 Running for 1 seconds... 
00:08:13.469 00:08:13.469 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:13.469 ------------------------------------------------------------------------------------ 00:08:13.469 0,0 5760/s 237 MiB/s 0 0 00:08:13.469 ==================================================================================== 00:08:13.469 Total 5760/s 611 MiB/s 0 0' 00:08:13.469 15:58:43 -- accel/accel.sh@20 -- # IFS=: 00:08:13.469 15:58:43 -- accel/accel.sh@20 -- # read -r var val 00:08:13.469 15:58:43 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:13.469 15:58:43 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 00:08:13.469 15:58:43 -- accel/accel.sh@12 -- # build_accel_config 00:08:13.469 15:58:43 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:13.469 15:58:43 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:13.469 15:58:43 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:13.469 15:58:43 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:13.469 15:58:43 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:13.470 15:58:43 -- accel/accel.sh@41 -- # local IFS=, 00:08:13.470 15:58:43 -- accel/accel.sh@42 -- # jq -r . 00:08:13.470 [2024-11-20 15:58:43.887657] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:13.470 [2024-11-20 15:58:43.887747] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1202795 ] 00:08:13.470 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.470 [2024-11-20 15:58:43.957956] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:13.470 [2024-11-20 15:58:43.992340] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val= 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val= 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val= 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val=0x1 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val= 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val= 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val=decompress 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 
00:08:13.470 15:58:44 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val= 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val=software 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.470 15:58:44 -- accel/accel.sh@23 -- # accel_module=software 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val=32 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val=32 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val=1 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val=Yes 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val= 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:13.470 15:58:44 -- accel/accel.sh@21 -- # val= 00:08:13.470 15:58:44 -- accel/accel.sh@22 -- # case "$var" in 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # IFS=: 00:08:13.470 15:58:44 -- accel/accel.sh@20 -- # read -r var val 00:08:14.408 15:58:45 -- accel/accel.sh@21 -- # val= 00:08:14.409 15:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.409 15:58:45 -- accel/accel.sh@20 -- # IFS=: 00:08:14.409 15:58:45 -- accel/accel.sh@20 -- # read -r var val 00:08:14.409 15:58:45 -- accel/accel.sh@21 -- # val= 00:08:14.409 15:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.409 15:58:45 -- accel/accel.sh@20 -- # IFS=: 00:08:14.409 15:58:45 -- accel/accel.sh@20 -- # read -r var val 00:08:14.409 15:58:45 -- accel/accel.sh@21 -- # val= 00:08:14.409 15:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.409 15:58:45 -- 
accel/accel.sh@20 -- # IFS=: 00:08:14.409 15:58:45 -- accel/accel.sh@20 -- # read -r var val 00:08:14.409 15:58:45 -- accel/accel.sh@21 -- # val= 00:08:14.409 15:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.409 15:58:45 -- accel/accel.sh@20 -- # IFS=: 00:08:14.409 15:58:45 -- accel/accel.sh@20 -- # read -r var val 00:08:14.409 15:58:45 -- accel/accel.sh@21 -- # val= 00:08:14.409 15:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.409 15:58:45 -- accel/accel.sh@20 -- # IFS=: 00:08:14.409 15:58:45 -- accel/accel.sh@20 -- # read -r var val 00:08:14.409 15:58:45 -- accel/accel.sh@21 -- # val= 00:08:14.409 15:58:45 -- accel/accel.sh@22 -- # case "$var" in 00:08:14.409 15:58:45 -- accel/accel.sh@20 -- # IFS=: 00:08:14.409 15:58:45 -- accel/accel.sh@20 -- # read -r var val 00:08:14.409 15:58:45 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:14.409 15:58:45 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:14.409 15:58:45 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:14.409 00:08:14.409 real 0m2.622s 00:08:14.409 user 0m2.367s 00:08:14.409 sys 0m0.263s 00:08:14.409 15:58:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:14.409 15:58:45 -- common/autotest_common.sh@10 -- # set +x 00:08:14.409 ************************************ 00:08:14.409 END TEST accel_decmop_full 00:08:14.409 ************************************ 00:08:14.409 15:58:45 -- accel/accel.sh@111 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:14.668 15:58:45 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:14.668 15:58:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:14.668 15:58:45 -- common/autotest_common.sh@10 -- # set +x 00:08:14.668 ************************************ 00:08:14.668 START TEST accel_decomp_mcore 00:08:14.668 ************************************ 00:08:14.668 15:58:45 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:14.668 15:58:45 -- accel/accel.sh@16 -- # local accel_opc 00:08:14.668 15:58:45 -- accel/accel.sh@17 -- # local accel_module 00:08:14.668 15:58:45 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:14.668 15:58:45 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:14.668 15:58:45 -- accel/accel.sh@12 -- # build_accel_config 00:08:14.669 15:58:45 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:14.669 15:58:45 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:14.669 15:58:45 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:14.669 15:58:45 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:14.669 15:58:45 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:14.669 15:58:45 -- accel/accel.sh@41 -- # local IFS=, 00:08:14.669 15:58:45 -- accel/accel.sh@42 -- # jq -r . 00:08:14.669 [2024-11-20 15:58:45.246632] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
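The accel_decmop_full case that just finished differs from plain accel_decomp only by the extra '-o 0' argument; its visible effect in this log is that accel_perf reports a 111250-byte transfer size instead of 4096 bytes, with throughput dropping to 5760 transfers/s while Total bandwidth rises to roughly 611 MiB/s. The same transfers-times-size arithmetic holds (again assuming that is how the Total row is derived):
# 5760 transfers/s * 111250 bytes per transfer, in MiB/s -> ~611
echo 'scale=1; 5760 * 111250 / 1048576' | bc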
00:08:14.669 [2024-11-20 15:58:45.246698] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203084 ] 00:08:14.669 EAL: No free 2048 kB hugepages reported on node 1 00:08:14.669 [2024-11-20 15:58:45.316405] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:14.669 [2024-11-20 15:58:45.354163] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:14.669 [2024-11-20 15:58:45.354260] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:14.669 [2024-11-20 15:58:45.354342] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:14.669 [2024-11-20 15:58:45.354359] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.049 15:58:46 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:16.049 00:08:16.049 SPDK Configuration: 00:08:16.049 Core mask: 0xf 00:08:16.049 00:08:16.049 Accel Perf Configuration: 00:08:16.049 Workload Type: decompress 00:08:16.049 Transfer size: 4096 bytes 00:08:16.049 Vector count 1 00:08:16.049 Module: software 00:08:16.049 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:16.049 Queue depth: 32 00:08:16.049 Allocate depth: 32 00:08:16.049 # threads/core: 1 00:08:16.049 Run time: 1 seconds 00:08:16.049 Verify: Yes 00:08:16.049 00:08:16.049 Running for 1 seconds... 00:08:16.049 00:08:16.049 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:16.049 ------------------------------------------------------------------------------------ 00:08:16.049 0,0 73312/s 135 MiB/s 0 0 00:08:16.049 3,0 73792/s 135 MiB/s 0 0 00:08:16.049 2,0 73664/s 135 MiB/s 0 0 00:08:16.049 1,0 73856/s 136 MiB/s 0 0 00:08:16.049 ==================================================================================== 00:08:16.049 Total 294624/s 1150 MiB/s 0 0' 00:08:16.049 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.049 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.049 15:58:46 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:16.049 15:58:46 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:08:16.049 15:58:46 -- accel/accel.sh@12 -- # build_accel_config 00:08:16.049 15:58:46 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:16.049 15:58:46 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:16.049 15:58:46 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:16.049 15:58:46 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:16.049 15:58:46 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:16.049 15:58:46 -- accel/accel.sh@41 -- # local IFS=, 00:08:16.049 15:58:46 -- accel/accel.sh@42 -- # jq -r . 00:08:16.049 [2024-11-20 15:58:46.554367] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
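The accel_decomp_mcore results above were produced with '-m 0xf', i.e. a 0xf core mask, and the reactor messages confirm four reactors (cores 0-3) instead of one. Each core runs at 73-74k transfers/s, slightly below the 85k transfers/s of the single-core run, so the aggregate lands at roughly 3.5x the single-core figure; assuming the Total row is again transfers times transfer size:
# per-core rates 73312 + 73792 + 73664 + 73856 -> 294624 transfers/s in total
echo '73312 + 73792 + 73664 + 73856' | bc
# 294624 transfers/s * 4096 bytes, in MiB/s -> ~1150, matching the Total row
echo 'scale=1; 294624 * 4096 / 1048576' | bc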
00:08:16.049 [2024-11-20 15:58:46.554442] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203353 ] 00:08:16.049 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.050 [2024-11-20 15:58:46.624780] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:16.050 [2024-11-20 15:58:46.661709] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:16.050 [2024-11-20 15:58:46.661806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:16.050 [2024-11-20 15:58:46.661866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:16.050 [2024-11-20 15:58:46.661868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val= 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val= 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val= 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val=0xf 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val= 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val= 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val=decompress 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.050 15:58:46 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val= 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val=software 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.050 15:58:46 -- accel/accel.sh@23 -- # accel_module=software 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" 
in 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val=32 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val=32 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val=1 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val=Yes 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val= 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:16.050 15:58:46 -- accel/accel.sh@21 -- # val= 00:08:16.050 15:58:46 -- accel/accel.sh@22 -- # case "$var" in 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # IFS=: 00:08:16.050 15:58:46 -- accel/accel.sh@20 -- # read -r var val 00:08:17.431 15:58:47 -- accel/accel.sh@21 -- # val= 00:08:17.431 15:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.431 15:58:47 -- accel/accel.sh@20 -- # IFS=: 00:08:17.431 15:58:47 -- accel/accel.sh@20 -- # read -r var val 00:08:17.431 15:58:47 -- accel/accel.sh@21 -- # val= 00:08:17.431 15:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.431 15:58:47 -- accel/accel.sh@20 -- # IFS=: 00:08:17.431 15:58:47 -- accel/accel.sh@20 -- # read -r var val 00:08:17.431 15:58:47 -- accel/accel.sh@21 -- # val= 00:08:17.431 15:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.431 15:58:47 -- accel/accel.sh@20 -- # IFS=: 00:08:17.431 15:58:47 -- accel/accel.sh@20 -- # read -r var val 00:08:17.431 15:58:47 -- accel/accel.sh@21 -- # val= 00:08:17.431 15:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.431 15:58:47 -- accel/accel.sh@20 -- # IFS=: 00:08:17.431 15:58:47 -- accel/accel.sh@20 -- # read -r var val 00:08:17.431 15:58:47 -- accel/accel.sh@21 -- # val= 00:08:17.431 15:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.431 15:58:47 -- accel/accel.sh@20 -- # IFS=: 00:08:17.431 15:58:47 -- accel/accel.sh@20 -- # read -r var val 00:08:17.431 15:58:47 -- accel/accel.sh@21 -- # val= 00:08:17.431 15:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.431 15:58:47 -- accel/accel.sh@20 -- # IFS=: 00:08:17.431 15:58:47 -- accel/accel.sh@20 -- # read -r var val 00:08:17.431 15:58:47 -- accel/accel.sh@21 -- # val= 00:08:17.431 15:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.431 15:58:47 -- accel/accel.sh@20 -- # IFS=: 00:08:17.431 15:58:47 -- accel/accel.sh@20 -- # read -r var val 00:08:17.431 15:58:47 -- accel/accel.sh@21 -- # val= 00:08:17.432 15:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.432 15:58:47 
-- accel/accel.sh@20 -- # IFS=: 00:08:17.432 15:58:47 -- accel/accel.sh@20 -- # read -r var val 00:08:17.432 15:58:47 -- accel/accel.sh@21 -- # val= 00:08:17.432 15:58:47 -- accel/accel.sh@22 -- # case "$var" in 00:08:17.432 15:58:47 -- accel/accel.sh@20 -- # IFS=: 00:08:17.432 15:58:47 -- accel/accel.sh@20 -- # read -r var val 00:08:17.432 15:58:47 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:17.432 15:58:47 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:17.432 15:58:47 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:17.432 00:08:17.432 real 0m2.624s 00:08:17.432 user 0m9.023s 00:08:17.432 sys 0m0.269s 00:08:17.432 15:58:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:17.432 15:58:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.432 ************************************ 00:08:17.432 END TEST accel_decomp_mcore 00:08:17.432 ************************************ 00:08:17.432 15:58:47 -- accel/accel.sh@112 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:17.432 15:58:47 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:17.432 15:58:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:17.432 15:58:47 -- common/autotest_common.sh@10 -- # set +x 00:08:17.432 ************************************ 00:08:17.432 START TEST accel_decomp_full_mcore 00:08:17.432 ************************************ 00:08:17.432 15:58:47 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:17.432 15:58:47 -- accel/accel.sh@16 -- # local accel_opc 00:08:17.432 15:58:47 -- accel/accel.sh@17 -- # local accel_module 00:08:17.432 15:58:47 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:17.432 15:58:47 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:17.432 15:58:47 -- accel/accel.sh@12 -- # build_accel_config 00:08:17.432 15:58:47 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:17.432 15:58:47 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:17.432 15:58:47 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:17.432 15:58:47 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:17.432 15:58:47 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:17.432 15:58:47 -- accel/accel.sh@41 -- # local IFS=, 00:08:17.432 15:58:47 -- accel/accel.sh@42 -- # jq -r . 00:08:17.432 [2024-11-20 15:58:47.920210] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:08:17.432 [2024-11-20 15:58:47.920294] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203639 ] 00:08:17.432 EAL: No free 2048 kB hugepages reported on node 1 00:08:17.432 [2024-11-20 15:58:47.992293] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:17.432 [2024-11-20 15:58:48.030743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:17.432 [2024-11-20 15:58:48.030838] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:17.432 [2024-11-20 15:58:48.030927] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:17.432 [2024-11-20 15:58:48.030929] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.813 15:58:49 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:18.813 00:08:18.813 SPDK Configuration: 00:08:18.813 Core mask: 0xf 00:08:18.813 00:08:18.813 Accel Perf Configuration: 00:08:18.813 Workload Type: decompress 00:08:18.813 Transfer size: 111250 bytes 00:08:18.813 Vector count 1 00:08:18.813 Module: software 00:08:18.813 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:18.813 Queue depth: 32 00:08:18.813 Allocate depth: 32 00:08:18.813 # threads/core: 1 00:08:18.813 Run time: 1 seconds 00:08:18.813 Verify: Yes 00:08:18.813 00:08:18.813 Running for 1 seconds... 00:08:18.813 00:08:18.813 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:18.813 ------------------------------------------------------------------------------------ 00:08:18.813 0,0 5696/s 235 MiB/s 0 0 00:08:18.813 3,0 5696/s 235 MiB/s 0 0 00:08:18.813 2,0 5696/s 235 MiB/s 0 0 00:08:18.813 1,0 5696/s 235 MiB/s 0 0 00:08:18.813 ==================================================================================== 00:08:18.813 Total 22784/s 2417 MiB/s 0 0' 00:08:18.813 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.813 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.813 15:58:49 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:18.813 15:58:49 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:08:18.813 15:58:49 -- accel/accel.sh@12 -- # build_accel_config 00:08:18.813 15:58:49 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:18.814 15:58:49 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:18.814 15:58:49 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:18.814 15:58:49 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:18.814 15:58:49 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:18.814 15:58:49 -- accel/accel.sh@41 -- # local IFS=, 00:08:18.814 15:58:49 -- accel/accel.sh@42 -- # jq -r . 00:08:18.814 [2024-11-20 15:58:49.240436] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
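Combining the two previous variations, accel_decomp_full_mcore runs the 111250-byte transfers across the 0xf core mask; the per-core rate here (5696/s) is essentially the same as the single-core large-transfer run (5760/s), so software decompress of large buffers appears to scale close to linearly across the four reactors:
# 4 cores * 5696 transfers/s -> 22784; 22784 * 111250 bytes in MiB/s -> ~2417, matching the Total row
echo '4 * 5696' | bc
echo 'scale=1; 22784 * 111250 / 1048576' | bc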
00:08:18.814 [2024-11-20 15:58:49.240501] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1203906 ] 00:08:18.814 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.814 [2024-11-20 15:58:49.310540] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.814 [2024-11-20 15:58:49.347832] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.814 [2024-11-20 15:58:49.347924] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.814 [2024-11-20 15:58:49.347989] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.814 [2024-11-20 15:58:49.347991] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val= 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val= 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val= 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val=0xf 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val= 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val= 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val=decompress 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.814 15:58:49 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val= 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val=software 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.814 15:58:49 -- accel/accel.sh@23 -- # accel_module=software 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" 
in 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val=32 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val=32 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val=1 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val=Yes 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val= 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:18.814 15:58:49 -- accel/accel.sh@21 -- # val= 00:08:18.814 15:58:49 -- accel/accel.sh@22 -- # case "$var" in 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # IFS=: 00:08:18.814 15:58:49 -- accel/accel.sh@20 -- # read -r var val 00:08:19.753 15:58:50 -- accel/accel.sh@21 -- # val= 00:08:19.753 15:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.753 15:58:50 -- accel/accel.sh@20 -- # IFS=: 00:08:19.753 15:58:50 -- accel/accel.sh@20 -- # read -r var val 00:08:19.753 15:58:50 -- accel/accel.sh@21 -- # val= 00:08:19.753 15:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.753 15:58:50 -- accel/accel.sh@20 -- # IFS=: 00:08:19.753 15:58:50 -- accel/accel.sh@20 -- # read -r var val 00:08:19.753 15:58:50 -- accel/accel.sh@21 -- # val= 00:08:19.753 15:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.753 15:58:50 -- accel/accel.sh@20 -- # IFS=: 00:08:19.753 15:58:50 -- accel/accel.sh@20 -- # read -r var val 00:08:19.753 15:58:50 -- accel/accel.sh@21 -- # val= 00:08:19.753 15:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.753 15:58:50 -- accel/accel.sh@20 -- # IFS=: 00:08:19.753 15:58:50 -- accel/accel.sh@20 -- # read -r var val 00:08:19.753 15:58:50 -- accel/accel.sh@21 -- # val= 00:08:19.753 15:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.753 15:58:50 -- accel/accel.sh@20 -- # IFS=: 00:08:19.753 15:58:50 -- accel/accel.sh@20 -- # read -r var val 00:08:19.753 15:58:50 -- accel/accel.sh@21 -- # val= 00:08:19.753 15:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.753 15:58:50 -- accel/accel.sh@20 -- # IFS=: 00:08:19.753 15:58:50 -- accel/accel.sh@20 -- # read -r var val 00:08:19.753 15:58:50 -- accel/accel.sh@21 -- # val= 00:08:19.753 15:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.753 15:58:50 -- accel/accel.sh@20 -- # IFS=: 00:08:19.753 15:58:50 -- accel/accel.sh@20 -- # read -r var val 00:08:19.753 15:58:50 -- accel/accel.sh@21 -- # val= 00:08:19.753 15:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.753 15:58:50 
-- accel/accel.sh@20 -- # IFS=: 00:08:19.753 15:58:50 -- accel/accel.sh@20 -- # read -r var val 00:08:19.753 15:58:50 -- accel/accel.sh@21 -- # val= 00:08:19.753 15:58:50 -- accel/accel.sh@22 -- # case "$var" in 00:08:19.753 15:58:50 -- accel/accel.sh@20 -- # IFS=: 00:08:19.753 15:58:50 -- accel/accel.sh@20 -- # read -r var val 00:08:19.753 15:58:50 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:19.753 15:58:50 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:19.753 15:58:50 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:19.753 00:08:19.753 real 0m2.649s 00:08:19.753 user 0m9.078s 00:08:19.753 sys 0m0.286s 00:08:19.753 15:58:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:19.753 15:58:50 -- common/autotest_common.sh@10 -- # set +x 00:08:19.753 ************************************ 00:08:19.753 END TEST accel_decomp_full_mcore 00:08:19.753 ************************************ 00:08:20.013 15:58:50 -- accel/accel.sh@113 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:20.013 15:58:50 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:08:20.013 15:58:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:20.013 15:58:50 -- common/autotest_common.sh@10 -- # set +x 00:08:20.013 ************************************ 00:08:20.013 START TEST accel_decomp_mthread 00:08:20.013 ************************************ 00:08:20.013 15:58:50 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:20.013 15:58:50 -- accel/accel.sh@16 -- # local accel_opc 00:08:20.013 15:58:50 -- accel/accel.sh@17 -- # local accel_module 00:08:20.013 15:58:50 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:20.013 15:58:50 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:20.013 15:58:50 -- accel/accel.sh@12 -- # build_accel_config 00:08:20.013 15:58:50 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:20.013 15:58:50 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:20.013 15:58:50 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:20.013 15:58:50 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:20.013 15:58:50 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:20.013 15:58:50 -- accel/accel.sh@41 -- # local IFS=, 00:08:20.013 15:58:50 -- accel/accel.sh@42 -- # jq -r . 00:08:20.013 [2024-11-20 15:58:50.617108] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:20.013 [2024-11-20 15:58:50.617175] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204110 ] 00:08:20.013 EAL: No free 2048 kB hugepages reported on node 1 00:08:20.013 [2024-11-20 15:58:50.687360] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.013 [2024-11-20 15:58:50.723264] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.393 15:58:51 -- accel/accel.sh@18 -- # out='Preparing input file... 
00:08:21.394 00:08:21.394 SPDK Configuration: 00:08:21.394 Core mask: 0x1 00:08:21.394 00:08:21.394 Accel Perf Configuration: 00:08:21.394 Workload Type: decompress 00:08:21.394 Transfer size: 4096 bytes 00:08:21.394 Vector count 1 00:08:21.394 Module: software 00:08:21.394 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:21.394 Queue depth: 32 00:08:21.394 Allocate depth: 32 00:08:21.394 # threads/core: 2 00:08:21.394 Run time: 1 seconds 00:08:21.394 Verify: Yes 00:08:21.394 00:08:21.394 Running for 1 seconds... 00:08:21.394 00:08:21.394 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:21.394 ------------------------------------------------------------------------------------ 00:08:21.394 0,1 43808/s 80 MiB/s 0 0 00:08:21.394 0,0 43712/s 80 MiB/s 0 0 00:08:21.394 ==================================================================================== 00:08:21.394 Total 87520/s 341 MiB/s 0 0' 00:08:21.394 15:58:51 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:51 -- accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:51 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:21.394 15:58:51 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -T 2 00:08:21.394 15:58:51 -- accel/accel.sh@12 -- # build_accel_config 00:08:21.394 15:58:51 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:21.394 15:58:51 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:21.394 15:58:51 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:21.394 15:58:51 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:21.394 15:58:51 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:21.394 15:58:51 -- accel/accel.sh@41 -- # local IFS=, 00:08:21.394 15:58:51 -- accel/accel.sh@42 -- # jq -r . 00:08:21.394 [2024-11-20 15:58:51.921384] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
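The accel_decomp_mthread results above come from the '-T 2' variant, which the configuration block reports as '# threads/core: 2'; the per-thread rows 0,0 and 0,1 therefore share core 0, and their combined 87520 transfers/s is essentially the same as the single-threaded software run rather than double it, which is what one would expect when a single core is already saturated (an interpretation, not something the log states). The aggregate again matches transfers times the 4096-byte transfer size:
# 43808 + 43712 = 87520 transfers/s; * 4096 bytes in MiB/s -> ~341
echo 'scale=1; (43808 + 43712) * 4096 / 1048576' | bc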
00:08:21.394 [2024-11-20 15:58:51.921448] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204259 ] 00:08:21.394 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.394 [2024-11-20 15:58:51.990941] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.394 [2024-11-20 15:58:52.025648] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val= 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val= 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val= 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val=0x1 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val= 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val= 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val=decompress 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val='4096 bytes' 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val= 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val=software 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@23 -- # accel_module=software 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val=32 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- 
accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val=32 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val=2 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val=Yes 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val= 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:08:21.394 15:58:52 -- accel/accel.sh@21 -- # val= 00:08:21.394 15:58:52 -- accel/accel.sh@22 -- # case "$var" in 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # IFS=: 00:08:21.394 15:58:52 -- accel/accel.sh@20 -- # read -r var val 00:08:22.775 15:58:53 -- accel/accel.sh@21 -- # val= 00:08:22.775 15:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.775 15:58:53 -- accel/accel.sh@20 -- # IFS=: 00:08:22.775 15:58:53 -- accel/accel.sh@20 -- # read -r var val 00:08:22.775 15:58:53 -- accel/accel.sh@21 -- # val= 00:08:22.775 15:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.775 15:58:53 -- accel/accel.sh@20 -- # IFS=: 00:08:22.775 15:58:53 -- accel/accel.sh@20 -- # read -r var val 00:08:22.775 15:58:53 -- accel/accel.sh@21 -- # val= 00:08:22.775 15:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.775 15:58:53 -- accel/accel.sh@20 -- # IFS=: 00:08:22.775 15:58:53 -- accel/accel.sh@20 -- # read -r var val 00:08:22.775 15:58:53 -- accel/accel.sh@21 -- # val= 00:08:22.775 15:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.775 15:58:53 -- accel/accel.sh@20 -- # IFS=: 00:08:22.775 15:58:53 -- accel/accel.sh@20 -- # read -r var val 00:08:22.775 15:58:53 -- accel/accel.sh@21 -- # val= 00:08:22.775 15:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.775 15:58:53 -- accel/accel.sh@20 -- # IFS=: 00:08:22.775 15:58:53 -- accel/accel.sh@20 -- # read -r var val 00:08:22.775 15:58:53 -- accel/accel.sh@21 -- # val= 00:08:22.775 15:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.775 15:58:53 -- accel/accel.sh@20 -- # IFS=: 00:08:22.775 15:58:53 -- accel/accel.sh@20 -- # read -r var val 00:08:22.775 15:58:53 -- accel/accel.sh@21 -- # val= 00:08:22.775 15:58:53 -- accel/accel.sh@22 -- # case "$var" in 00:08:22.776 15:58:53 -- accel/accel.sh@20 -- # IFS=: 00:08:22.776 15:58:53 -- accel/accel.sh@20 -- # read -r var val 00:08:22.776 15:58:53 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:22.776 15:58:53 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:22.776 15:58:53 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:22.776 00:08:22.776 real 0m2.613s 00:08:22.776 user 0m2.373s 00:08:22.776 sys 0m0.248s 00:08:22.776 15:58:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:22.776 15:58:53 -- common/autotest_common.sh@10 -- # set +x 
00:08:22.776 ************************************ 00:08:22.776 END TEST accel_decomp_mthread 00:08:22.776 ************************************ 00:08:22.776 15:58:53 -- accel/accel.sh@114 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:22.776 15:58:53 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:08:22.776 15:58:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:22.776 15:58:53 -- common/autotest_common.sh@10 -- # set +x 00:08:22.776 ************************************ 00:08:22.776 START TEST accel_deomp_full_mthread 00:08:22.776 ************************************ 00:08:22.776 15:58:53 -- common/autotest_common.sh@1114 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:22.776 15:58:53 -- accel/accel.sh@16 -- # local accel_opc 00:08:22.776 15:58:53 -- accel/accel.sh@17 -- # local accel_module 00:08:22.776 15:58:53 -- accel/accel.sh@18 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:22.776 15:58:53 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:22.776 15:58:53 -- accel/accel.sh@12 -- # build_accel_config 00:08:22.776 15:58:53 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:22.776 15:58:53 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:22.776 15:58:53 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:22.776 15:58:53 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:22.776 15:58:53 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:22.776 15:58:53 -- accel/accel.sh@41 -- # local IFS=, 00:08:22.776 15:58:53 -- accel/accel.sh@42 -- # jq -r . 00:08:22.776 [2024-11-20 15:58:53.279306] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:22.776 [2024-11-20 15:58:53.279383] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204505 ] 00:08:22.776 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.776 [2024-11-20 15:58:53.352247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.776 [2024-11-20 15:58:53.387933] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.156 15:58:54 -- accel/accel.sh@18 -- # out='Preparing input file... 00:08:24.156 00:08:24.156 SPDK Configuration: 00:08:24.156 Core mask: 0x1 00:08:24.156 00:08:24.156 Accel Perf Configuration: 00:08:24.156 Workload Type: decompress 00:08:24.156 Transfer size: 111250 bytes 00:08:24.156 Vector count 1 00:08:24.156 Module: software 00:08:24.156 File Name: /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:24.156 Queue depth: 32 00:08:24.156 Allocate depth: 32 00:08:24.156 # threads/core: 2 00:08:24.156 Run time: 1 seconds 00:08:24.156 Verify: Yes 00:08:24.156 00:08:24.156 Running for 1 seconds... 
00:08:24.156 00:08:24.156 Core,Thread Transfers Bandwidth Failed Miscompares 00:08:24.156 ------------------------------------------------------------------------------------ 00:08:24.156 0,1 2912/s 120 MiB/s 0 0 00:08:24.156 0,0 2912/s 120 MiB/s 0 0 00:08:24.156 ==================================================================================== 00:08:24.156 Total 5824/s 617 MiB/s 0 0' 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:24.156 15:58:54 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:08:24.156 15:58:54 -- accel/accel.sh@12 -- # build_accel_config 00:08:24.156 15:58:54 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:24.156 15:58:54 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:24.156 15:58:54 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:24.156 15:58:54 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:24.156 15:58:54 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:24.156 15:58:54 -- accel/accel.sh@41 -- # local IFS=, 00:08:24.156 15:58:54 -- accel/accel.sh@42 -- # jq -r . 00:08:24.156 [2024-11-20 15:58:54.603131] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:24.156 [2024-11-20 15:58:54.603197] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1204779 ] 00:08:24.156 EAL: No free 2048 kB hugepages reported on node 1 00:08:24.156 [2024-11-20 15:58:54.672062] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.156 [2024-11-20 15:58:54.706230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val= 00:08:24.156 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val= 00:08:24.156 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val= 00:08:24.156 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val=0x1 00:08:24.156 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val= 00:08:24.156 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val= 00:08:24.156 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val=decompress 00:08:24.156 15:58:54 -- 
accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@24 -- # accel_opc=decompress 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val='111250 bytes' 00:08:24.156 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val= 00:08:24.156 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val=software 00:08:24.156 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@23 -- # accel_module=software 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/bib 00:08:24.156 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val=32 00:08:24.156 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val=32 00:08:24.156 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val=2 00:08:24.156 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val='1 seconds' 00:08:24.156 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val=Yes 00:08:24.156 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val= 00:08:24.156 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:24.156 15:58:54 -- accel/accel.sh@21 -- # val= 00:08:24.156 15:58:54 -- accel/accel.sh@22 -- # case "$var" in 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # IFS=: 00:08:24.156 15:58:54 -- accel/accel.sh@20 -- # read -r var val 00:08:25.096 15:58:55 -- accel/accel.sh@21 -- # val= 00:08:25.096 15:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.096 15:58:55 -- accel/accel.sh@20 -- # IFS=: 00:08:25.096 15:58:55 -- accel/accel.sh@20 -- # read -r var val 00:08:25.096 15:58:55 -- accel/accel.sh@21 -- # val= 00:08:25.096 15:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.096 15:58:55 -- accel/accel.sh@20 -- # IFS=: 00:08:25.096 15:58:55 -- accel/accel.sh@20 -- # read -r var val 00:08:25.096 15:58:55 -- accel/accel.sh@21 -- # val= 00:08:25.096 15:58:55 -- accel/accel.sh@22 -- # case "$var" in 
00:08:25.096 15:58:55 -- accel/accel.sh@20 -- # IFS=: 00:08:25.096 15:58:55 -- accel/accel.sh@20 -- # read -r var val 00:08:25.096 15:58:55 -- accel/accel.sh@21 -- # val= 00:08:25.096 15:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.096 15:58:55 -- accel/accel.sh@20 -- # IFS=: 00:08:25.096 15:58:55 -- accel/accel.sh@20 -- # read -r var val 00:08:25.096 15:58:55 -- accel/accel.sh@21 -- # val= 00:08:25.096 15:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.096 15:58:55 -- accel/accel.sh@20 -- # IFS=: 00:08:25.096 15:58:55 -- accel/accel.sh@20 -- # read -r var val 00:08:25.096 15:58:55 -- accel/accel.sh@21 -- # val= 00:08:25.096 15:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.096 15:58:55 -- accel/accel.sh@20 -- # IFS=: 00:08:25.096 15:58:55 -- accel/accel.sh@20 -- # read -r var val 00:08:25.096 15:58:55 -- accel/accel.sh@21 -- # val= 00:08:25.096 15:58:55 -- accel/accel.sh@22 -- # case "$var" in 00:08:25.096 15:58:55 -- accel/accel.sh@20 -- # IFS=: 00:08:25.355 15:58:55 -- accel/accel.sh@20 -- # read -r var val 00:08:25.355 15:58:55 -- accel/accel.sh@28 -- # [[ -n software ]] 00:08:25.355 15:58:55 -- accel/accel.sh@28 -- # [[ -n decompress ]] 00:08:25.355 15:58:55 -- accel/accel.sh@28 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:08:25.355 00:08:25.355 real 0m2.651s 00:08:25.355 user 0m2.393s 00:08:25.355 sys 0m0.265s 00:08:25.355 15:58:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.355 15:58:55 -- common/autotest_common.sh@10 -- # set +x 00:08:25.355 ************************************ 00:08:25.355 END TEST accel_deomp_full_mthread 00:08:25.355 ************************************ 00:08:25.355 15:58:55 -- accel/accel.sh@116 -- # [[ n == y ]] 00:08:25.355 15:58:55 -- accel/accel.sh@129 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:25.355 15:58:55 -- accel/accel.sh@129 -- # build_accel_config 00:08:25.355 15:58:55 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:08:25.355 15:58:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.355 15:58:55 -- accel/accel.sh@32 -- # accel_json_cfg=() 00:08:25.355 15:58:55 -- common/autotest_common.sh@10 -- # set +x 00:08:25.355 15:58:55 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:08:25.355 15:58:55 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:08:25.355 15:58:55 -- accel/accel.sh@35 -- # [[ 0 -gt 0 ]] 00:08:25.355 15:58:55 -- accel/accel.sh@37 -- # [[ -n '' ]] 00:08:25.355 15:58:55 -- accel/accel.sh@41 -- # local IFS=, 00:08:25.355 15:58:55 -- accel/accel.sh@42 -- # jq -r . 00:08:25.355 ************************************ 00:08:25.355 START TEST accel_dif_functional_tests 00:08:25.355 ************************************ 00:08:25.355 15:58:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:08:25.355 [2024-11-20 15:58:55.994456] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
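For reference, the multi-threaded decompress case that just completed drives SPDK's accel_perf example directly. A minimal standalone sketch of the same invocation, with paths assumed from this workspace and the JSON accel config (piped over /dev/fd/62 by the harness) left at its software default:

    # -t 1: run for 1 second        -w decompress: workload under test
    # -l:   compressed input file   -y: verify the result
    # -T 2: two worker threads (the 0,0 / 0,1 rows in the results table above)
    # -o 0 is forwarded unchanged from accel.sh
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" -t 1 -w decompress \
        -l "$SPDK/test/accel/bib" -y -o 0 -T 2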
00:08:25.355 [2024-11-20 15:58:55.994511] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1205061 ] 00:08:25.355 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.355 [2024-11-20 15:58:56.063037] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:25.355 [2024-11-20 15:58:56.098153] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.355 [2024-11-20 15:58:56.098248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.355 [2024-11-20 15:58:56.098247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.615 00:08:25.615 00:08:25.615 CUnit - A unit testing framework for C - Version 2.1-3 00:08:25.615 http://cunit.sourceforge.net/ 00:08:25.615 00:08:25.615 00:08:25.615 Suite: accel_dif 00:08:25.615 Test: verify: DIF generated, GUARD check ...passed 00:08:25.615 Test: verify: DIF generated, APPTAG check ...passed 00:08:25.616 Test: verify: DIF generated, REFTAG check ...passed 00:08:25.616 Test: verify: DIF not generated, GUARD check ...[2024-11-20 15:58:56.161572] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:25.616 [2024-11-20 15:58:56.161623] dif.c: 777:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:08:25.616 passed 00:08:25.616 Test: verify: DIF not generated, APPTAG check ...[2024-11-20 15:58:56.161654] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:25.616 [2024-11-20 15:58:56.161670] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:08:25.616 passed 00:08:25.616 Test: verify: DIF not generated, REFTAG check ...[2024-11-20 15:58:56.161688] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:25.616 [2024-11-20 15:58:56.161705] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:08:25.616 passed 00:08:25.616 Test: verify: APPTAG correct, APPTAG check ...passed 00:08:25.616 Test: verify: APPTAG incorrect, APPTAG check ...[2024-11-20 15:58:56.161745] dif.c: 792:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:08:25.616 passed 00:08:25.616 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:08:25.616 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:08:25.616 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:08:25.616 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-11-20 15:58:56.161851] dif.c: 813:_dif_verify: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:08:25.616 passed 00:08:25.616 Test: generate copy: DIF generated, GUARD check ...passed 00:08:25.616 Test: generate copy: DIF generated, APTTAG check ...passed 00:08:25.616 Test: generate copy: DIF generated, REFTAG check ...passed 00:08:25.616 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:08:25.616 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:08:25.616 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:08:25.616 Test: generate copy: iovecs-len validate ...[2024-11-20 15:58:56.162021] dif.c:1167:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:08:25.616 passed 00:08:25.616 Test: generate copy: buffer alignment validate ...passed 00:08:25.616 00:08:25.616 Run Summary: Type Total Ran Passed Failed Inactive 00:08:25.616 suites 1 1 n/a 0 0 00:08:25.616 tests 20 20 20 0 0 00:08:25.616 asserts 204 204 204 0 n/a 00:08:25.616 00:08:25.616 Elapsed time = 0.002 seconds 00:08:25.616 00:08:25.616 real 0m0.366s 00:08:25.616 user 0m0.546s 00:08:25.616 sys 0m0.160s 00:08:25.616 15:58:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.616 15:58:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.616 ************************************ 00:08:25.616 END TEST accel_dif_functional_tests 00:08:25.616 ************************************ 00:08:25.616 00:08:25.616 real 0m55.863s 00:08:25.616 user 1m3.498s 00:08:25.616 sys 0m7.114s 00:08:25.616 15:58:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:25.616 15:58:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.616 ************************************ 00:08:25.616 END TEST accel 00:08:25.616 ************************************ 00:08:25.616 15:58:56 -- spdk/autotest.sh@177 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:25.616 15:58:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:25.616 15:58:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:25.616 15:58:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.616 ************************************ 00:08:25.616 START TEST accel_rpc 00:08:25.616 ************************************ 00:08:25.616 15:58:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel/accel_rpc.sh 00:08:25.875 * Looking for test storage... 00:08:25.875 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/accel 00:08:25.875 15:58:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:25.875 15:58:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:25.875 15:58:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:25.875 15:58:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:25.875 15:58:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:25.875 15:58:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:25.875 15:58:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:25.875 15:58:56 -- scripts/common.sh@335 -- # IFS=.-: 00:08:25.875 15:58:56 -- scripts/common.sh@335 -- # read -ra ver1 00:08:25.875 15:58:56 -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.875 15:58:56 -- scripts/common.sh@336 -- # read -ra ver2 00:08:25.875 15:58:56 -- scripts/common.sh@337 -- # local 'op=<' 00:08:25.875 15:58:56 -- scripts/common.sh@339 -- # ver1_l=2 00:08:25.875 15:58:56 -- scripts/common.sh@340 -- # ver2_l=1 00:08:25.875 15:58:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:25.875 15:58:56 -- scripts/common.sh@343 -- # case "$op" in 00:08:25.875 15:58:56 -- scripts/common.sh@344 -- # : 1 00:08:25.875 15:58:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:25.875 15:58:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
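The DIF functional suite summarized above is a standalone CUnit binary; the *ERROR* lines it printed are the negative-path assertions (deliberately mismatched Guard, App Tag and Ref Tag values) confirming that verification really rejects bad protection information. A hedged sketch of re-running it by hand, using an empty subsystems list as a stand-in for the accel config the harness feeds it over /dev/fd/62:

    # Assumption: an empty "subsystems" array is an acceptable substitute for
    # the generated config; process substitution supplies the /dev/fd/NN path.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    "$SPDK/test/accel/dif/dif" -c <(echo '{"subsystems": []}')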
ver1_l : ver2_l) )) 00:08:25.875 15:58:56 -- scripts/common.sh@364 -- # decimal 1 00:08:25.875 15:58:56 -- scripts/common.sh@352 -- # local d=1 00:08:25.875 15:58:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.875 15:58:56 -- scripts/common.sh@354 -- # echo 1 00:08:25.875 15:58:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:25.875 15:58:56 -- scripts/common.sh@365 -- # decimal 2 00:08:25.875 15:58:56 -- scripts/common.sh@352 -- # local d=2 00:08:25.875 15:58:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.875 15:58:56 -- scripts/common.sh@354 -- # echo 2 00:08:25.875 15:58:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:25.875 15:58:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:25.875 15:58:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:25.875 15:58:56 -- scripts/common.sh@367 -- # return 0 00:08:25.875 15:58:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.875 15:58:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:25.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.875 --rc genhtml_branch_coverage=1 00:08:25.875 --rc genhtml_function_coverage=1 00:08:25.875 --rc genhtml_legend=1 00:08:25.875 --rc geninfo_all_blocks=1 00:08:25.875 --rc geninfo_unexecuted_blocks=1 00:08:25.875 00:08:25.875 ' 00:08:25.875 15:58:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:25.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.875 --rc genhtml_branch_coverage=1 00:08:25.876 --rc genhtml_function_coverage=1 00:08:25.876 --rc genhtml_legend=1 00:08:25.876 --rc geninfo_all_blocks=1 00:08:25.876 --rc geninfo_unexecuted_blocks=1 00:08:25.876 00:08:25.876 ' 00:08:25.876 15:58:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:25.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.876 --rc genhtml_branch_coverage=1 00:08:25.876 --rc genhtml_function_coverage=1 00:08:25.876 --rc genhtml_legend=1 00:08:25.876 --rc geninfo_all_blocks=1 00:08:25.876 --rc geninfo_unexecuted_blocks=1 00:08:25.876 00:08:25.876 ' 00:08:25.876 15:58:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:25.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.876 --rc genhtml_branch_coverage=1 00:08:25.876 --rc genhtml_function_coverage=1 00:08:25.876 --rc genhtml_legend=1 00:08:25.876 --rc geninfo_all_blocks=1 00:08:25.876 --rc geninfo_unexecuted_blocks=1 00:08:25.876 00:08:25.876 ' 00:08:25.876 15:58:56 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:25.876 15:58:56 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1205187 00:08:25.876 15:58:56 -- accel/accel_rpc.sh@15 -- # waitforlisten 1205187 00:08:25.876 15:58:56 -- common/autotest_common.sh@829 -- # '[' -z 1205187 ']' 00:08:25.876 15:58:56 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.876 15:58:56 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:25.876 15:58:56 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:25.876 15:58:56 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:25.876 15:58:56 -- common/autotest_common.sh@10 -- # set +x 00:08:25.876 15:58:56 -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:08:25.876 [2024-11-20 15:58:56.604380] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:25.876 [2024-11-20 15:58:56.604434] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1205187 ] 00:08:25.876 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.876 [2024-11-20 15:58:56.674628] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.135 [2024-11-20 15:58:56.711507] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:26.135 [2024-11-20 15:58:56.711629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.135 15:58:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:26.135 15:58:56 -- common/autotest_common.sh@862 -- # return 0 00:08:26.135 15:58:56 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:08:26.135 15:58:56 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:08:26.135 15:58:56 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:08:26.135 15:58:56 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:08:26.135 15:58:56 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:08:26.135 15:58:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:26.135 15:58:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.135 15:58:56 -- common/autotest_common.sh@10 -- # set +x 00:08:26.135 ************************************ 00:08:26.135 START TEST accel_assign_opcode 00:08:26.136 ************************************ 00:08:26.136 15:58:56 -- common/autotest_common.sh@1114 -- # accel_assign_opcode_test_suite 00:08:26.136 15:58:56 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:08:26.136 15:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.136 15:58:56 -- common/autotest_common.sh@10 -- # set +x 00:08:26.136 [2024-11-20 15:58:56.752018] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:08:26.136 15:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.136 15:58:56 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:08:26.136 15:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.136 15:58:56 -- common/autotest_common.sh@10 -- # set +x 00:08:26.136 [2024-11-20 15:58:56.760025] accel_rpc.c: 168:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:08:26.136 15:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.136 15:58:56 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:08:26.136 15:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.136 15:58:56 -- common/autotest_common.sh@10 -- # set +x 00:08:26.136 15:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.136 15:58:56 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:08:26.136 15:58:56 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:08:26.136 15:58:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.136 15:58:56 -- common/autotest_common.sh@10 -- # set +x 
00:08:26.136 15:58:56 -- accel/accel_rpc.sh@42 -- # grep software 00:08:26.395 15:58:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.395 software 00:08:26.395 00:08:26.395 real 0m0.226s 00:08:26.395 user 0m0.051s 00:08:26.395 sys 0m0.008s 00:08:26.395 15:58:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:26.395 15:58:56 -- common/autotest_common.sh@10 -- # set +x 00:08:26.395 ************************************ 00:08:26.395 END TEST accel_assign_opcode 00:08:26.395 ************************************ 00:08:26.395 15:58:57 -- accel/accel_rpc.sh@55 -- # killprocess 1205187 00:08:26.395 15:58:57 -- common/autotest_common.sh@936 -- # '[' -z 1205187 ']' 00:08:26.395 15:58:57 -- common/autotest_common.sh@940 -- # kill -0 1205187 00:08:26.395 15:58:57 -- common/autotest_common.sh@941 -- # uname 00:08:26.395 15:58:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:26.395 15:58:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1205187 00:08:26.395 15:58:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:26.395 15:58:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:26.395 15:58:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1205187' 00:08:26.395 killing process with pid 1205187 00:08:26.395 15:58:57 -- common/autotest_common.sh@955 -- # kill 1205187 00:08:26.395 15:58:57 -- common/autotest_common.sh@960 -- # wait 1205187 00:08:26.655 00:08:26.655 real 0m0.969s 00:08:26.655 user 0m0.873s 00:08:26.655 sys 0m0.442s 00:08:26.655 15:58:57 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:26.655 15:58:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.655 ************************************ 00:08:26.655 END TEST accel_rpc 00:08:26.655 ************************************ 00:08:26.655 15:58:57 -- spdk/autotest.sh@178 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:26.655 15:58:57 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:26.655 15:58:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:26.655 15:58:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.655 ************************************ 00:08:26.655 START TEST app_cmdline 00:08:26.655 ************************************ 00:08:26.655 15:58:57 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/cmdline.sh 00:08:26.915 * Looking for test storage... 
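Stripped of the xtrace noise, the accel_assign_opcode case that just passed is three JSON-RPC calls against a target still waiting in --wait-for-rpc mode. A minimal sketch with the same rpc.py used throughout this log:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py"
    # The target must not have finished initializing yet (spdk_tgt --wait-for-rpc);
    # the test assigns the opcode before framework_start_init for exactly that reason.
    "$RPC" accel_assign_opc -o copy -m software     # pin the copy opcode to the software module
    "$RPC" framework_start_init                     # let initialization proceed
    "$RPC" accel_get_opc_assignments | jq -r .copy  # expected to print: software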
00:08:26.915 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:26.915 15:58:57 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:26.915 15:58:57 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:26.915 15:58:57 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:26.915 15:58:57 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:26.915 15:58:57 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:26.915 15:58:57 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:26.915 15:58:57 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:26.915 15:58:57 -- scripts/common.sh@335 -- # IFS=.-: 00:08:26.915 15:58:57 -- scripts/common.sh@335 -- # read -ra ver1 00:08:26.915 15:58:57 -- scripts/common.sh@336 -- # IFS=.-: 00:08:26.915 15:58:57 -- scripts/common.sh@336 -- # read -ra ver2 00:08:26.915 15:58:57 -- scripts/common.sh@337 -- # local 'op=<' 00:08:26.915 15:58:57 -- scripts/common.sh@339 -- # ver1_l=2 00:08:26.915 15:58:57 -- scripts/common.sh@340 -- # ver2_l=1 00:08:26.915 15:58:57 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:26.915 15:58:57 -- scripts/common.sh@343 -- # case "$op" in 00:08:26.915 15:58:57 -- scripts/common.sh@344 -- # : 1 00:08:26.915 15:58:57 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:26.915 15:58:57 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:26.915 15:58:57 -- scripts/common.sh@364 -- # decimal 1 00:08:26.915 15:58:57 -- scripts/common.sh@352 -- # local d=1 00:08:26.915 15:58:57 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:26.915 15:58:57 -- scripts/common.sh@354 -- # echo 1 00:08:26.915 15:58:57 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:26.915 15:58:57 -- scripts/common.sh@365 -- # decimal 2 00:08:26.915 15:58:57 -- scripts/common.sh@352 -- # local d=2 00:08:26.915 15:58:57 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:26.915 15:58:57 -- scripts/common.sh@354 -- # echo 2 00:08:26.915 15:58:57 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:26.915 15:58:57 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:26.915 15:58:57 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:26.915 15:58:57 -- scripts/common.sh@367 -- # return 0 00:08:26.915 15:58:57 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:26.915 15:58:57 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:26.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.915 --rc genhtml_branch_coverage=1 00:08:26.915 --rc genhtml_function_coverage=1 00:08:26.915 --rc genhtml_legend=1 00:08:26.915 --rc geninfo_all_blocks=1 00:08:26.915 --rc geninfo_unexecuted_blocks=1 00:08:26.915 00:08:26.915 ' 00:08:26.915 15:58:57 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:26.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.915 --rc genhtml_branch_coverage=1 00:08:26.915 --rc genhtml_function_coverage=1 00:08:26.915 --rc genhtml_legend=1 00:08:26.915 --rc geninfo_all_blocks=1 00:08:26.915 --rc geninfo_unexecuted_blocks=1 00:08:26.915 00:08:26.915 ' 00:08:26.915 15:58:57 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:26.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.915 --rc genhtml_branch_coverage=1 00:08:26.915 --rc genhtml_function_coverage=1 00:08:26.915 --rc genhtml_legend=1 00:08:26.915 --rc geninfo_all_blocks=1 00:08:26.915 --rc geninfo_unexecuted_blocks=1 00:08:26.915 00:08:26.915 ' 
00:08:26.915 15:58:57 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:26.915 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:26.915 --rc genhtml_branch_coverage=1 00:08:26.915 --rc genhtml_function_coverage=1 00:08:26.915 --rc genhtml_legend=1 00:08:26.915 --rc geninfo_all_blocks=1 00:08:26.915 --rc geninfo_unexecuted_blocks=1 00:08:26.915 00:08:26.915 ' 00:08:26.915 15:58:57 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:26.915 15:58:57 -- app/cmdline.sh@17 -- # spdk_tgt_pid=1205479 00:08:26.915 15:58:57 -- app/cmdline.sh@18 -- # waitforlisten 1205479 00:08:26.915 15:58:57 -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:26.915 15:58:57 -- common/autotest_common.sh@829 -- # '[' -z 1205479 ']' 00:08:26.915 15:58:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.915 15:58:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:26.915 15:58:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.915 15:58:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:26.915 15:58:57 -- common/autotest_common.sh@10 -- # set +x 00:08:26.915 [2024-11-20 15:58:57.644002] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:26.915 [2024-11-20 15:58:57.644053] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1205479 ] 00:08:26.915 EAL: No free 2048 kB hugepages reported on node 1 00:08:26.915 [2024-11-20 15:58:57.710115] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.175 [2024-11-20 15:58:57.745777] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:27.175 [2024-11-20 15:58:57.745898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.743 15:58:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:27.743 15:58:58 -- common/autotest_common.sh@862 -- # return 0 00:08:27.743 15:58:58 -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:08:28.003 { 00:08:28.003 "version": "SPDK v24.01.1-pre git sha1 c13c99a5e", 00:08:28.003 "fields": { 00:08:28.003 "major": 24, 00:08:28.003 "minor": 1, 00:08:28.003 "patch": 1, 00:08:28.003 "suffix": "-pre", 00:08:28.003 "commit": "c13c99a5e" 00:08:28.003 } 00:08:28.003 } 00:08:28.003 15:58:58 -- app/cmdline.sh@22 -- # expected_methods=() 00:08:28.003 15:58:58 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:28.003 15:58:58 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:28.003 15:58:58 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:28.003 15:58:58 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:28.003 15:58:58 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:28.003 15:58:58 -- app/cmdline.sh@26 -- # sort 00:08:28.003 15:58:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.003 15:58:58 -- common/autotest_common.sh@10 -- # set +x 00:08:28.003 15:58:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.003 15:58:58 -- 
app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:28.003 15:58:58 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:28.003 15:58:58 -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:28.003 15:58:58 -- common/autotest_common.sh@650 -- # local es=0 00:08:28.003 15:58:58 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:28.003 15:58:58 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:28.003 15:58:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.003 15:58:58 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:28.003 15:58:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.003 15:58:58 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:28.003 15:58:58 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:28.003 15:58:58 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:08:28.003 15:58:58 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:08:28.003 15:58:58 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:28.003 request: 00:08:28.003 { 00:08:28.003 "method": "env_dpdk_get_mem_stats", 00:08:28.003 "req_id": 1 00:08:28.003 } 00:08:28.003 Got JSON-RPC error response 00:08:28.003 response: 00:08:28.003 { 00:08:28.003 "code": -32601, 00:08:28.003 "message": "Method not found" 00:08:28.003 } 00:08:28.263 15:58:58 -- common/autotest_common.sh@653 -- # es=1 00:08:28.263 15:58:58 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:28.263 15:58:58 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:28.263 15:58:58 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:28.263 15:58:58 -- app/cmdline.sh@1 -- # killprocess 1205479 00:08:28.263 15:58:58 -- common/autotest_common.sh@936 -- # '[' -z 1205479 ']' 00:08:28.263 15:58:58 -- common/autotest_common.sh@940 -- # kill -0 1205479 00:08:28.263 15:58:58 -- common/autotest_common.sh@941 -- # uname 00:08:28.263 15:58:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:28.263 15:58:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1205479 00:08:28.263 15:58:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:08:28.263 15:58:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:08:28.263 15:58:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1205479' 00:08:28.263 killing process with pid 1205479 00:08:28.263 15:58:58 -- common/autotest_common.sh@955 -- # kill 1205479 00:08:28.263 15:58:58 -- common/autotest_common.sh@960 -- # wait 1205479 00:08:28.523 00:08:28.523 real 0m1.744s 00:08:28.523 user 0m2.024s 00:08:28.523 sys 0m0.492s 00:08:28.523 15:58:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:28.523 15:58:59 -- common/autotest_common.sh@10 -- # set +x 00:08:28.523 ************************************ 00:08:28.523 END TEST app_cmdline 00:08:28.523 ************************************ 00:08:28.523 15:58:59 -- spdk/autotest.sh@179 -- # run_test version 
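The -32601 "Method not found" response above is the point of the app_cmdline test: spdk_tgt was started with an RPC allow-list, so any method outside it is rejected at dispatch rather than executed. Condensed, the behaviour looks like this; in the real run, waitforlisten gates the client calls until the RPC socket is up:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Only two RPC methods are permitted, exactly as in the run above.
    "$SPDK/build/bin/spdk_tgt" --rpcs-allowed spdk_get_version,rpc_get_methods &
    "$SPDK/scripts/rpc.py" spdk_get_version               # allowed: returns the version object
    "$SPDK/scripts/rpc.py" rpc_get_methods | jq -r '.[]'  # allowed: lists just these two methods
    "$SPDK/scripts/rpc.py" env_dpdk_get_mem_stats         # rejected with -32601 "Method not found"
    kill %1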
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:28.523 15:58:59 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:08:28.523 15:58:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:28.523 15:58:59 -- common/autotest_common.sh@10 -- # set +x 00:08:28.523 ************************************ 00:08:28.523 START TEST version 00:08:28.523 ************************************ 00:08:28.523 15:58:59 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/version.sh 00:08:28.523 * Looking for test storage... 00:08:28.523 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:28.523 15:58:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:28.523 15:58:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:28.523 15:58:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:28.782 15:58:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:28.782 15:58:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:28.782 15:58:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:28.782 15:58:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:28.782 15:58:59 -- scripts/common.sh@335 -- # IFS=.-: 00:08:28.782 15:58:59 -- scripts/common.sh@335 -- # read -ra ver1 00:08:28.782 15:58:59 -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.782 15:58:59 -- scripts/common.sh@336 -- # read -ra ver2 00:08:28.782 15:58:59 -- scripts/common.sh@337 -- # local 'op=<' 00:08:28.782 15:58:59 -- scripts/common.sh@339 -- # ver1_l=2 00:08:28.782 15:58:59 -- scripts/common.sh@340 -- # ver2_l=1 00:08:28.782 15:58:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:28.782 15:58:59 -- scripts/common.sh@343 -- # case "$op" in 00:08:28.782 15:58:59 -- scripts/common.sh@344 -- # : 1 00:08:28.782 15:58:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:28.782 15:58:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.782 15:58:59 -- scripts/common.sh@364 -- # decimal 1 00:08:28.782 15:58:59 -- scripts/common.sh@352 -- # local d=1 00:08:28.782 15:58:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.782 15:58:59 -- scripts/common.sh@354 -- # echo 1 00:08:28.782 15:58:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:28.782 15:58:59 -- scripts/common.sh@365 -- # decimal 2 00:08:28.782 15:58:59 -- scripts/common.sh@352 -- # local d=2 00:08:28.782 15:58:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.782 15:58:59 -- scripts/common.sh@354 -- # echo 2 00:08:28.782 15:58:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:28.782 15:58:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:28.782 15:58:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:28.782 15:58:59 -- scripts/common.sh@367 -- # return 0 00:08:28.782 15:58:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.782 15:58:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:28.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.782 --rc genhtml_branch_coverage=1 00:08:28.782 --rc genhtml_function_coverage=1 00:08:28.782 --rc genhtml_legend=1 00:08:28.782 --rc geninfo_all_blocks=1 00:08:28.782 --rc geninfo_unexecuted_blocks=1 00:08:28.782 00:08:28.782 ' 00:08:28.782 15:58:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:28.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.782 --rc genhtml_branch_coverage=1 00:08:28.782 --rc genhtml_function_coverage=1 00:08:28.782 --rc genhtml_legend=1 00:08:28.782 --rc geninfo_all_blocks=1 00:08:28.782 --rc geninfo_unexecuted_blocks=1 00:08:28.782 00:08:28.782 ' 00:08:28.782 15:58:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:28.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.782 --rc genhtml_branch_coverage=1 00:08:28.782 --rc genhtml_function_coverage=1 00:08:28.782 --rc genhtml_legend=1 00:08:28.782 --rc geninfo_all_blocks=1 00:08:28.782 --rc geninfo_unexecuted_blocks=1 00:08:28.782 00:08:28.782 ' 00:08:28.782 15:58:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:28.782 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.782 --rc genhtml_branch_coverage=1 00:08:28.782 --rc genhtml_function_coverage=1 00:08:28.782 --rc genhtml_legend=1 00:08:28.782 --rc geninfo_all_blocks=1 00:08:28.782 --rc geninfo_unexecuted_blocks=1 00:08:28.782 00:08:28.782 ' 00:08:28.782 15:58:59 -- app/version.sh@17 -- # get_header_version major 00:08:28.782 15:58:59 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:28.782 15:58:59 -- app/version.sh@14 -- # cut -f2 00:08:28.782 15:58:59 -- app/version.sh@14 -- # tr -d '"' 00:08:28.782 15:58:59 -- app/version.sh@17 -- # major=24 00:08:28.782 15:58:59 -- app/version.sh@18 -- # get_header_version minor 00:08:28.782 15:58:59 -- app/version.sh@14 -- # tr -d '"' 00:08:28.782 15:58:59 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:28.782 15:58:59 -- app/version.sh@14 -- # cut -f2 00:08:28.782 15:58:59 -- app/version.sh@18 -- # minor=1 00:08:28.782 15:58:59 -- app/version.sh@19 -- # get_header_version patch 00:08:28.782 15:58:59 -- app/version.sh@14 -- # tr -d '"' 00:08:28.782 15:58:59 -- app/version.sh@13 -- # grep -E 
'^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:28.782 15:58:59 -- app/version.sh@14 -- # cut -f2 00:08:28.782 15:58:59 -- app/version.sh@19 -- # patch=1 00:08:28.782 15:58:59 -- app/version.sh@20 -- # get_header_version suffix 00:08:28.782 15:58:59 -- app/version.sh@14 -- # cut -f2 00:08:28.782 15:58:59 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/version.h 00:08:28.782 15:58:59 -- app/version.sh@14 -- # tr -d '"' 00:08:28.782 15:58:59 -- app/version.sh@20 -- # suffix=-pre 00:08:28.782 15:58:59 -- app/version.sh@22 -- # version=24.1 00:08:28.782 15:58:59 -- app/version.sh@25 -- # (( patch != 0 )) 00:08:28.782 15:58:59 -- app/version.sh@25 -- # version=24.1.1 00:08:28.782 15:58:59 -- app/version.sh@28 -- # version=24.1.1rc0 00:08:28.782 15:58:59 -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:28.782 15:58:59 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:28.782 15:58:59 -- app/version.sh@30 -- # py_version=24.1.1rc0 00:08:28.782 15:58:59 -- app/version.sh@31 -- # [[ 24.1.1rc0 == \2\4\.\1\.\1\r\c\0 ]] 00:08:28.782 00:08:28.782 real 0m0.267s 00:08:28.782 user 0m0.155s 00:08:28.782 sys 0m0.159s 00:08:28.782 15:58:59 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:28.782 15:58:59 -- common/autotest_common.sh@10 -- # set +x 00:08:28.782 ************************************ 00:08:28.782 END TEST version 00:08:28.782 ************************************ 00:08:28.782 15:58:59 -- spdk/autotest.sh@181 -- # '[' 0 -eq 1 ']' 00:08:28.782 15:58:59 -- spdk/autotest.sh@191 -- # uname -s 00:08:28.782 15:58:59 -- spdk/autotest.sh@191 -- # [[ Linux == Linux ]] 00:08:28.782 15:58:59 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:28.782 15:58:59 -- spdk/autotest.sh@192 -- # [[ 0 -eq 1 ]] 00:08:28.782 15:58:59 -- spdk/autotest.sh@204 -- # '[' 0 -eq 1 ']' 00:08:28.782 15:58:59 -- spdk/autotest.sh@251 -- # '[' 0 -eq 1 ']' 00:08:28.782 15:58:59 -- spdk/autotest.sh@255 -- # timing_exit lib 00:08:28.782 15:58:59 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:28.782 15:58:59 -- common/autotest_common.sh@10 -- # set +x 00:08:28.782 15:58:59 -- spdk/autotest.sh@257 -- # '[' 0 -eq 1 ']' 00:08:28.782 15:58:59 -- spdk/autotest.sh@265 -- # '[' 0 -eq 1 ']' 00:08:28.782 15:58:59 -- spdk/autotest.sh@274 -- # '[' 1 -eq 1 ']' 00:08:28.782 15:58:59 -- spdk/autotest.sh@275 -- # export NET_TYPE 00:08:28.782 15:58:59 -- spdk/autotest.sh@278 -- # '[' rdma = rdma ']' 00:08:28.782 15:58:59 -- spdk/autotest.sh@279 -- # run_test nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:28.782 15:58:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:28.782 15:58:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:28.782 15:58:59 -- common/autotest_common.sh@10 -- # set +x 00:08:29.042 ************************************ 00:08:29.042 START TEST nvmf_rdma 00:08:29.042 ************************************ 00:08:29.042 15:58:59 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=rdma 00:08:29.042 * Looking for test 
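version.sh, which just passed, is plain text extraction from include/spdk/version.h followed by a comparison with the installed Python package. A compact sketch of the same flow; get_field is only an illustrative helper here, and the rc0 mapping mirrors the 24.1.1rc0 value version.sh@28 printed above for the -pre suffix:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    hdr="$SPDK/include/spdk/version.h"
    # Same grep/cut/tr pipeline the test applies to each field.
    get_field() { grep -E "^#define SPDK_VERSION_${1}[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
    major=$(get_field MAJOR); minor=$(get_field MINOR)      # 24, 1
    patch=$(get_field PATCH); suffix=$(get_field SUFFIX)    # 1, -pre
    version="$major.$minor"
    (( patch != 0 )) && version+=".$patch"                  # 24.1.1
    [[ $suffix == -pre ]] && version+=rc0                   # 24.1.1rc0, as logged
    py_version=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py_version == "$version" ]] && echo "version match: $version"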
storage... 00:08:29.042 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf 00:08:29.042 15:58:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:29.042 15:58:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:29.042 15:58:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:29.042 15:58:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:29.042 15:58:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:29.042 15:58:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:29.042 15:58:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:29.042 15:58:59 -- scripts/common.sh@335 -- # IFS=.-: 00:08:29.042 15:58:59 -- scripts/common.sh@335 -- # read -ra ver1 00:08:29.042 15:58:59 -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.042 15:58:59 -- scripts/common.sh@336 -- # read -ra ver2 00:08:29.042 15:58:59 -- scripts/common.sh@337 -- # local 'op=<' 00:08:29.042 15:58:59 -- scripts/common.sh@339 -- # ver1_l=2 00:08:29.042 15:58:59 -- scripts/common.sh@340 -- # ver2_l=1 00:08:29.042 15:58:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:29.042 15:58:59 -- scripts/common.sh@343 -- # case "$op" in 00:08:29.042 15:58:59 -- scripts/common.sh@344 -- # : 1 00:08:29.042 15:58:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:29.042 15:58:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.042 15:58:59 -- scripts/common.sh@364 -- # decimal 1 00:08:29.042 15:58:59 -- scripts/common.sh@352 -- # local d=1 00:08:29.042 15:58:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.042 15:58:59 -- scripts/common.sh@354 -- # echo 1 00:08:29.042 15:58:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:29.042 15:58:59 -- scripts/common.sh@365 -- # decimal 2 00:08:29.042 15:58:59 -- scripts/common.sh@352 -- # local d=2 00:08:29.042 15:58:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.042 15:58:59 -- scripts/common.sh@354 -- # echo 2 00:08:29.042 15:58:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:29.042 15:58:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:29.042 15:58:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:29.042 15:58:59 -- scripts/common.sh@367 -- # return 0 00:08:29.042 15:58:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.042 15:58:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:29.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.042 --rc genhtml_branch_coverage=1 00:08:29.042 --rc genhtml_function_coverage=1 00:08:29.042 --rc genhtml_legend=1 00:08:29.042 --rc geninfo_all_blocks=1 00:08:29.042 --rc geninfo_unexecuted_blocks=1 00:08:29.042 00:08:29.042 ' 00:08:29.042 15:58:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:29.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.042 --rc genhtml_branch_coverage=1 00:08:29.042 --rc genhtml_function_coverage=1 00:08:29.042 --rc genhtml_legend=1 00:08:29.042 --rc geninfo_all_blocks=1 00:08:29.042 --rc geninfo_unexecuted_blocks=1 00:08:29.042 00:08:29.042 ' 00:08:29.042 15:58:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:29.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.042 --rc genhtml_branch_coverage=1 00:08:29.042 --rc genhtml_function_coverage=1 00:08:29.042 --rc genhtml_legend=1 00:08:29.042 --rc geninfo_all_blocks=1 00:08:29.042 --rc geninfo_unexecuted_blocks=1 00:08:29.042 00:08:29.042 
' 00:08:29.042 15:58:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:29.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.042 --rc genhtml_branch_coverage=1 00:08:29.042 --rc genhtml_function_coverage=1 00:08:29.043 --rc genhtml_legend=1 00:08:29.043 --rc geninfo_all_blocks=1 00:08:29.043 --rc geninfo_unexecuted_blocks=1 00:08:29.043 00:08:29.043 ' 00:08:29.043 15:58:59 -- nvmf/nvmf.sh@10 -- # uname -s 00:08:29.043 15:58:59 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:08:29.043 15:58:59 -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.043 15:58:59 -- nvmf/common.sh@7 -- # uname -s 00:08:29.043 15:58:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.043 15:58:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.043 15:58:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.043 15:58:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.043 15:58:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.043 15:58:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.043 15:58:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.043 15:58:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.043 15:58:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.043 15:58:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.043 15:58:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:29.043 15:58:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:29.043 15:58:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.043 15:58:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.043 15:58:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.043 15:58:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:29.043 15:58:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.043 15:58:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.043 15:58:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.043 15:58:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.043 15:58:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.043 15:58:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.043 15:58:59 -- paths/export.sh@5 -- # export PATH 00:08:29.043 15:58:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.043 15:58:59 -- nvmf/common.sh@46 -- # : 0 00:08:29.043 15:58:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:29.043 15:58:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:29.043 15:58:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:29.043 15:58:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.043 15:58:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.043 15:58:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:29.043 15:58:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:29.043 15:58:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:29.043 15:58:59 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:08:29.043 15:58:59 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:08:29.043 15:58:59 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:08:29.043 15:58:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:29.043 15:58:59 -- common/autotest_common.sh@10 -- # set +x 00:08:29.043 15:58:59 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:08:29.043 15:58:59 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:29.043 15:58:59 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:29.043 15:58:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:29.043 15:58:59 -- common/autotest_common.sh@10 -- # set +x 00:08:29.043 ************************************ 00:08:29.043 START TEST nvmf_example 00:08:29.043 ************************************ 00:08:29.043 15:58:59 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=rdma 00:08:29.302 * Looking for test storage... 
00:08:29.302 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:29.302 15:58:59 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:29.302 15:58:59 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:29.302 15:58:59 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:29.302 15:58:59 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:29.302 15:58:59 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:29.302 15:58:59 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:29.302 15:58:59 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:29.302 15:58:59 -- scripts/common.sh@335 -- # IFS=.-: 00:08:29.302 15:58:59 -- scripts/common.sh@335 -- # read -ra ver1 00:08:29.302 15:58:59 -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.302 15:58:59 -- scripts/common.sh@336 -- # read -ra ver2 00:08:29.302 15:58:59 -- scripts/common.sh@337 -- # local 'op=<' 00:08:29.302 15:58:59 -- scripts/common.sh@339 -- # ver1_l=2 00:08:29.302 15:58:59 -- scripts/common.sh@340 -- # ver2_l=1 00:08:29.302 15:58:59 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:29.302 15:58:59 -- scripts/common.sh@343 -- # case "$op" in 00:08:29.302 15:58:59 -- scripts/common.sh@344 -- # : 1 00:08:29.302 15:58:59 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:29.302 15:58:59 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.302 15:58:59 -- scripts/common.sh@364 -- # decimal 1 00:08:29.302 15:58:59 -- scripts/common.sh@352 -- # local d=1 00:08:29.302 15:58:59 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.302 15:58:59 -- scripts/common.sh@354 -- # echo 1 00:08:29.302 15:58:59 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:29.302 15:58:59 -- scripts/common.sh@365 -- # decimal 2 00:08:29.302 15:58:59 -- scripts/common.sh@352 -- # local d=2 00:08:29.302 15:58:59 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.302 15:58:59 -- scripts/common.sh@354 -- # echo 2 00:08:29.302 15:58:59 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:29.302 15:58:59 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:29.302 15:58:59 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:29.302 15:58:59 -- scripts/common.sh@367 -- # return 0 00:08:29.302 15:58:59 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.302 15:58:59 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:29.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.302 --rc genhtml_branch_coverage=1 00:08:29.302 --rc genhtml_function_coverage=1 00:08:29.302 --rc genhtml_legend=1 00:08:29.302 --rc geninfo_all_blocks=1 00:08:29.302 --rc geninfo_unexecuted_blocks=1 00:08:29.302 00:08:29.302 ' 00:08:29.302 15:58:59 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:29.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.302 --rc genhtml_branch_coverage=1 00:08:29.302 --rc genhtml_function_coverage=1 00:08:29.302 --rc genhtml_legend=1 00:08:29.302 --rc geninfo_all_blocks=1 00:08:29.302 --rc geninfo_unexecuted_blocks=1 00:08:29.302 00:08:29.302 ' 00:08:29.302 15:58:59 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:29.302 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.302 --rc genhtml_branch_coverage=1 00:08:29.302 --rc genhtml_function_coverage=1 00:08:29.302 --rc genhtml_legend=1 00:08:29.303 --rc geninfo_all_blocks=1 00:08:29.303 --rc geninfo_unexecuted_blocks=1 00:08:29.303 00:08:29.303 ' 
00:08:29.303 15:58:59 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:29.303 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.303 --rc genhtml_branch_coverage=1 00:08:29.303 --rc genhtml_function_coverage=1 00:08:29.303 --rc genhtml_legend=1 00:08:29.303 --rc geninfo_all_blocks=1 00:08:29.303 --rc geninfo_unexecuted_blocks=1 00:08:29.303 00:08:29.303 ' 00:08:29.303 15:58:59 -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.303 15:58:59 -- nvmf/common.sh@7 -- # uname -s 00:08:29.303 15:58:59 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.303 15:58:59 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.303 15:58:59 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.303 15:58:59 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.303 15:58:59 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:29.303 15:58:59 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:29.303 15:58:59 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.303 15:58:59 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:29.303 15:58:59 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.303 15:58:59 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:29.303 15:58:59 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:29.303 15:58:59 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:29.303 15:58:59 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.303 15:58:59 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:29.303 15:58:59 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:29.303 15:58:59 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:29.303 15:58:59 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.303 15:58:59 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.303 15:58:59 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.303 15:58:59 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.303 15:58:59 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.303 15:58:59 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.303 15:58:59 -- paths/export.sh@5 -- # export PATH 00:08:29.303 15:58:59 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.303 15:58:59 -- nvmf/common.sh@46 -- # : 0 00:08:29.303 15:58:59 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:29.303 15:58:59 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:29.303 15:58:59 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:29.303 15:58:59 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.303 15:58:59 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.303 15:58:59 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:29.303 15:58:59 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:29.303 15:58:59 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:29.303 15:58:59 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:08:29.303 15:58:59 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:08:29.303 15:58:59 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:08:29.303 15:58:59 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:08:29.303 15:58:59 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:08:29.303 15:58:59 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:08:29.303 15:58:59 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:08:29.303 15:58:59 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:08:29.303 15:58:59 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:29.303 15:58:59 -- common/autotest_common.sh@10 -- # set +x 00:08:29.303 15:59:00 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:08:29.303 15:59:00 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:29.303 15:59:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.303 15:59:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:29.303 15:59:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:29.303 15:59:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:29.303 15:59:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.303 15:59:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:29.303 15:59:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:29.303 15:59:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:29.303 15:59:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:29.303 15:59:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:29.303 15:59:00 -- 
common/autotest_common.sh@10 -- # set +x 00:08:35.878 15:59:06 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:35.878 15:59:06 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:35.878 15:59:06 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:35.878 15:59:06 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:35.878 15:59:06 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:35.878 15:59:06 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:35.878 15:59:06 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:35.878 15:59:06 -- nvmf/common.sh@294 -- # net_devs=() 00:08:35.878 15:59:06 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:35.878 15:59:06 -- nvmf/common.sh@295 -- # e810=() 00:08:35.878 15:59:06 -- nvmf/common.sh@295 -- # local -ga e810 00:08:35.878 15:59:06 -- nvmf/common.sh@296 -- # x722=() 00:08:35.878 15:59:06 -- nvmf/common.sh@296 -- # local -ga x722 00:08:35.878 15:59:06 -- nvmf/common.sh@297 -- # mlx=() 00:08:35.878 15:59:06 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:35.878 15:59:06 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.878 15:59:06 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.878 15:59:06 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.878 15:59:06 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.878 15:59:06 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.878 15:59:06 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.878 15:59:06 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.878 15:59:06 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.878 15:59:06 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.878 15:59:06 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.878 15:59:06 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.878 15:59:06 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:35.878 15:59:06 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:35.878 15:59:06 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:35.878 15:59:06 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:35.878 15:59:06 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:35.878 15:59:06 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:35.878 15:59:06 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:35.878 15:59:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:35.878 15:59:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:35.878 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:35.878 15:59:06 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:35.878 15:59:06 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:35.878 15:59:06 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:35.878 15:59:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:35.878 15:59:06 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:35.878 15:59:06 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:35.878 15:59:06 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:35.878 15:59:06 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:35.878 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:35.878 15:59:06 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:35.878 15:59:06 -- nvmf/common.sh@345 -- # [[ 
mlx5_core == unbound ]] 00:08:35.878 15:59:06 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:35.878 15:59:06 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:35.878 15:59:06 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:35.878 15:59:06 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:35.879 15:59:06 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:35.879 15:59:06 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:35.879 15:59:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:35.879 15:59:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.879 15:59:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:35.879 15:59:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.879 15:59:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:35.879 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:35.879 15:59:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.879 15:59:06 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:35.879 15:59:06 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.879 15:59:06 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:35.879 15:59:06 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.879 15:59:06 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:35.879 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:35.879 15:59:06 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.879 15:59:06 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:35.879 15:59:06 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:35.879 15:59:06 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:35.879 15:59:06 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:35.879 15:59:06 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:35.879 15:59:06 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:35.879 15:59:06 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:35.879 15:59:06 -- nvmf/common.sh@57 -- # uname 00:08:35.879 15:59:06 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:35.879 15:59:06 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:35.879 15:59:06 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:35.879 15:59:06 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:35.879 15:59:06 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:35.879 15:59:06 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:35.879 15:59:06 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:35.879 15:59:06 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:35.879 15:59:06 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:35.879 15:59:06 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:35.879 15:59:06 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:35.879 15:59:06 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:35.879 15:59:06 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:35.879 15:59:06 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:35.879 15:59:06 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:35.879 15:59:06 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:35.879 15:59:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:35.879 15:59:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:35.879 15:59:06 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:35.879 15:59:06 -- nvmf/common.sh@103 
-- # echo mlx_0_0 00:08:35.879 15:59:06 -- nvmf/common.sh@104 -- # continue 2 00:08:35.879 15:59:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:35.879 15:59:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:35.879 15:59:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:35.879 15:59:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:35.879 15:59:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:35.879 15:59:06 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:35.879 15:59:06 -- nvmf/common.sh@104 -- # continue 2 00:08:35.879 15:59:06 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:35.879 15:59:06 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:35.879 15:59:06 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:35.879 15:59:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:35.879 15:59:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:35.879 15:59:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:35.879 15:59:06 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:35.879 15:59:06 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:35.879 15:59:06 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:35.879 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:35.879 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:35.879 altname enp217s0f0np0 00:08:35.879 altname ens818f0np0 00:08:35.879 inet 192.168.100.8/24 scope global mlx_0_0 00:08:35.879 valid_lft forever preferred_lft forever 00:08:35.879 15:59:06 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:35.879 15:59:06 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:35.879 15:59:06 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:35.879 15:59:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:35.879 15:59:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:35.879 15:59:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:35.879 15:59:06 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:35.879 15:59:06 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:35.879 15:59:06 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:35.879 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:35.879 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:35.879 altname enp217s0f1np1 00:08:35.879 altname ens818f1np1 00:08:35.879 inet 192.168.100.9/24 scope global mlx_0_1 00:08:35.879 valid_lft forever preferred_lft forever 00:08:35.879 15:59:06 -- nvmf/common.sh@410 -- # return 0 00:08:35.879 15:59:06 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:35.879 15:59:06 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:35.879 15:59:06 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:35.879 15:59:06 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:35.879 15:59:06 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:35.879 15:59:06 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:35.879 15:59:06 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:35.879 15:59:06 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:35.879 15:59:06 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:35.879 15:59:06 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:35.879 15:59:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:35.879 15:59:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:35.879 15:59:06 -- 
nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:35.879 15:59:06 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:35.879 15:59:06 -- nvmf/common.sh@104 -- # continue 2 00:08:35.879 15:59:06 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:35.879 15:59:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:35.879 15:59:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:35.879 15:59:06 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:35.879 15:59:06 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:35.879 15:59:06 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:35.879 15:59:06 -- nvmf/common.sh@104 -- # continue 2 00:08:35.879 15:59:06 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:35.879 15:59:06 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:35.879 15:59:06 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:35.879 15:59:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:35.879 15:59:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:35.879 15:59:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:35.879 15:59:06 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:35.879 15:59:06 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:35.879 15:59:06 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:35.879 15:59:06 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:35.879 15:59:06 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:35.879 15:59:06 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:35.879 15:59:06 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:35.879 192.168.100.9' 00:08:35.879 15:59:06 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:35.879 192.168.100.9' 00:08:35.879 15:59:06 -- nvmf/common.sh@445 -- # head -n 1 00:08:35.879 15:59:06 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:36.139 15:59:06 -- nvmf/common.sh@446 -- # head -n 1 00:08:36.139 15:59:06 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:36.139 192.168.100.9' 00:08:36.139 15:59:06 -- nvmf/common.sh@446 -- # tail -n +2 00:08:36.139 15:59:06 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:36.139 15:59:06 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:36.139 15:59:06 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:36.139 15:59:06 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:36.139 15:59:06 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:36.139 15:59:06 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:36.139 15:59:06 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:08:36.139 15:59:06 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:08:36.139 15:59:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:36.139 15:59:06 -- common/autotest_common.sh@10 -- # set +x 00:08:36.139 15:59:06 -- target/nvmf_example.sh@29 -- # '[' rdma == tcp ']' 00:08:36.139 15:59:06 -- target/nvmf_example.sh@34 -- # nvmfpid=1209327 00:08:36.139 15:59:06 -- target/nvmf_example.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:08:36.139 15:59:06 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:36.139 15:59:06 -- target/nvmf_example.sh@36 -- # waitforlisten 1209327 00:08:36.139 15:59:06 -- common/autotest_common.sh@829 -- # '[' -z 1209327 ']' 00:08:36.139 15:59:06 -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.139 15:59:06 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:36.139 15:59:06 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.139 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.139 15:59:06 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:36.139 15:59:06 -- common/autotest_common.sh@10 -- # set +x 00:08:36.139 EAL: No free 2048 kB hugepages reported on node 1 00:08:37.077 15:59:07 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:37.077 15:59:07 -- common/autotest_common.sh@862 -- # return 0 00:08:37.078 15:59:07 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:08:37.078 15:59:07 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:37.078 15:59:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.078 15:59:07 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:08:37.078 15:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.078 15:59:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.078 15:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.078 15:59:07 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:08:37.078 15:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.078 15:59:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.078 15:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.078 15:59:07 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:08:37.078 15:59:07 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:37.078 15:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.078 15:59:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.078 15:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.078 15:59:07 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:08:37.078 15:59:07 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:37.078 15:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.078 15:59:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.078 15:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.078 15:59:07 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:37.078 15:59:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.078 15:59:07 -- common/autotest_common.sh@10 -- # set +x 00:08:37.078 15:59:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.078 15:59:07 -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:08:37.078 15:59:07 -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:08:37.337 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.696 Initializing NVMe Controllers 00:08:49.696 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:08:49.696 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 
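For reference, the subsystem configuration that rpc_cmd drives above can be replayed by hand against a running nvmf target with scripts/rpc.py. The sketch below reuses this run's arguments (the 192.168.100.8 listener address and the Malloc0 bdev name come from this testbed); it is an illustration, not an extra step the test performs.

    # Same RPCs the test issues above, against the default /var/tmp/spdk.sock socket.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512                       # creates Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

    # Then drive I/O with the same perf invocation the test uses:
    ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'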
00:08:49.696 Initialization complete. Launching workers. 00:08:49.696 ======================================================== 00:08:49.696 Latency(us) 00:08:49.696 Device Information : IOPS MiB/s Average min max 00:08:49.696 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 26870.60 104.96 2381.67 579.65 13036.15 00:08:49.696 ======================================================== 00:08:49.696 Total : 26870.60 104.96 2381.67 579.65 13036.15 00:08:49.696 00:08:49.696 15:59:19 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:08:49.696 15:59:19 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:08:49.696 15:59:19 -- nvmf/common.sh@476 -- # nvmfcleanup 00:08:49.696 15:59:19 -- nvmf/common.sh@116 -- # sync 00:08:49.696 15:59:19 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:08:49.696 15:59:19 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:08:49.696 15:59:19 -- nvmf/common.sh@119 -- # set +e 00:08:49.696 15:59:19 -- nvmf/common.sh@120 -- # for i in {1..20} 00:08:49.696 15:59:19 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:08:49.696 rmmod nvme_rdma 00:08:49.696 rmmod nvme_fabrics 00:08:49.696 15:59:19 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:08:49.696 15:59:19 -- nvmf/common.sh@123 -- # set -e 00:08:49.696 15:59:19 -- nvmf/common.sh@124 -- # return 0 00:08:49.696 15:59:19 -- nvmf/common.sh@477 -- # '[' -n 1209327 ']' 00:08:49.696 15:59:19 -- nvmf/common.sh@478 -- # killprocess 1209327 00:08:49.696 15:59:19 -- common/autotest_common.sh@936 -- # '[' -z 1209327 ']' 00:08:49.696 15:59:19 -- common/autotest_common.sh@940 -- # kill -0 1209327 00:08:49.696 15:59:19 -- common/autotest_common.sh@941 -- # uname 00:08:49.696 15:59:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:08:49.696 15:59:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1209327 00:08:49.696 15:59:19 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:08:49.696 15:59:19 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:08:49.696 15:59:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1209327' 00:08:49.696 killing process with pid 1209327 00:08:49.696 15:59:19 -- common/autotest_common.sh@955 -- # kill 1209327 00:08:49.696 15:59:19 -- common/autotest_common.sh@960 -- # wait 1209327 00:08:49.696 nvmf threads initialize successfully 00:08:49.696 bdev subsystem init successfully 00:08:49.696 created a nvmf target service 00:08:49.696 create targets's poll groups done 00:08:49.696 all subsystems of target started 00:08:49.696 nvmf target is running 00:08:49.696 all subsystems of target stopped 00:08:49.696 destroy targets's poll groups done 00:08:49.696 destroyed the nvmf target service 00:08:49.696 bdev subsystem finish successfully 00:08:49.696 nvmf threads destroy successfully 00:08:49.696 15:59:19 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:08:49.696 15:59:19 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:08:49.696 15:59:19 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:08:49.696 15:59:19 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.696 15:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:49.696 00:08:49.696 real 0m19.681s 00:08:49.696 user 0m52.180s 00:08:49.696 sys 0m5.652s 00:08:49.696 15:59:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:08:49.696 15:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:49.696 ************************************ 00:08:49.696 END TEST nvmf_example 00:08:49.696 
************************************ 00:08:49.696 15:59:19 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:49.696 15:59:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:49.696 15:59:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:49.696 15:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:49.696 ************************************ 00:08:49.696 START TEST nvmf_filesystem 00:08:49.696 ************************************ 00:08:49.696 15:59:19 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=rdma 00:08:49.696 * Looking for test storage... 00:08:49.696 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:49.696 15:59:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:49.696 15:59:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:49.696 15:59:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:49.696 15:59:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:49.696 15:59:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:49.696 15:59:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:49.696 15:59:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:49.696 15:59:19 -- scripts/common.sh@335 -- # IFS=.-: 00:08:49.696 15:59:19 -- scripts/common.sh@335 -- # read -ra ver1 00:08:49.696 15:59:19 -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.696 15:59:19 -- scripts/common.sh@336 -- # read -ra ver2 00:08:49.696 15:59:19 -- scripts/common.sh@337 -- # local 'op=<' 00:08:49.696 15:59:19 -- scripts/common.sh@339 -- # ver1_l=2 00:08:49.696 15:59:19 -- scripts/common.sh@340 -- # ver2_l=1 00:08:49.696 15:59:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:49.696 15:59:19 -- scripts/common.sh@343 -- # case "$op" in 00:08:49.696 15:59:19 -- scripts/common.sh@344 -- # : 1 00:08:49.696 15:59:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:49.696 15:59:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:49.696 15:59:19 -- scripts/common.sh@364 -- # decimal 1 00:08:49.696 15:59:19 -- scripts/common.sh@352 -- # local d=1 00:08:49.696 15:59:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.696 15:59:19 -- scripts/common.sh@354 -- # echo 1 00:08:49.696 15:59:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:49.696 15:59:19 -- scripts/common.sh@365 -- # decimal 2 00:08:49.696 15:59:19 -- scripts/common.sh@352 -- # local d=2 00:08:49.696 15:59:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.696 15:59:19 -- scripts/common.sh@354 -- # echo 2 00:08:49.696 15:59:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:49.696 15:59:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:49.696 15:59:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:49.696 15:59:19 -- scripts/common.sh@367 -- # return 0 00:08:49.696 15:59:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.696 15:59:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:49.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.696 --rc genhtml_branch_coverage=1 00:08:49.696 --rc genhtml_function_coverage=1 00:08:49.696 --rc genhtml_legend=1 00:08:49.696 --rc geninfo_all_blocks=1 00:08:49.696 --rc geninfo_unexecuted_blocks=1 00:08:49.696 00:08:49.696 ' 00:08:49.696 15:59:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:49.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.696 --rc genhtml_branch_coverage=1 00:08:49.696 --rc genhtml_function_coverage=1 00:08:49.696 --rc genhtml_legend=1 00:08:49.696 --rc geninfo_all_blocks=1 00:08:49.696 --rc geninfo_unexecuted_blocks=1 00:08:49.697 00:08:49.697 ' 00:08:49.697 15:59:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:49.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.697 --rc genhtml_branch_coverage=1 00:08:49.697 --rc genhtml_function_coverage=1 00:08:49.697 --rc genhtml_legend=1 00:08:49.697 --rc geninfo_all_blocks=1 00:08:49.697 --rc geninfo_unexecuted_blocks=1 00:08:49.697 00:08:49.697 ' 00:08:49.697 15:59:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:49.697 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.697 --rc genhtml_branch_coverage=1 00:08:49.697 --rc genhtml_function_coverage=1 00:08:49.697 --rc genhtml_legend=1 00:08:49.697 --rc geninfo_all_blocks=1 00:08:49.697 --rc geninfo_unexecuted_blocks=1 00:08:49.697 00:08:49.697 ' 00:08:49.697 15:59:19 -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/autotest_common.sh 00:08:49.697 15:59:19 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:08:49.697 15:59:19 -- common/autotest_common.sh@34 -- # set -e 00:08:49.697 15:59:19 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:08:49.697 15:59:19 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:08:49.697 15:59:19 -- common/autotest_common.sh@38 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh ]] 00:08:49.697 15:59:19 -- common/autotest_common.sh@39 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/build_config.sh 00:08:49.697 15:59:19 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:08:49.697 15:59:19 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:08:49.697 15:59:19 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:08:49.697 15:59:19 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 
00:08:49.697 15:59:19 -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:08:49.697 15:59:19 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:08:49.697 15:59:19 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:08:49.697 15:59:19 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:08:49.697 15:59:19 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:08:49.697 15:59:19 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:08:49.697 15:59:19 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:08:49.697 15:59:19 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:08:49.697 15:59:19 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:08:49.697 15:59:19 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:08:49.697 15:59:19 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:08:49.697 15:59:19 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:08:49.697 15:59:19 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:08:49.697 15:59:19 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:08:49.697 15:59:19 -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:49.697 15:59:19 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:08:49.697 15:59:19 -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:08:49.697 15:59:19 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:08:49.697 15:59:19 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:08:49.697 15:59:19 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:08:49.697 15:59:19 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:08:49.697 15:59:19 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:08:49.697 15:59:19 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:08:49.697 15:59:19 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:08:49.697 15:59:19 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:08:49.697 15:59:19 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:08:49.697 15:59:19 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:08:49.697 15:59:19 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:08:49.697 15:59:19 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:08:49.697 15:59:19 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:08:49.697 15:59:19 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:08:49.697 15:59:19 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:49.697 15:59:19 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:08:49.697 15:59:19 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:08:49.697 15:59:19 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:08:49.697 15:59:19 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:08:49.697 15:59:19 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:08:49.697 15:59:19 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:08:49.697 15:59:19 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:08:49.697 15:59:19 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:08:49.697 15:59:19 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:08:49.697 15:59:19 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:08:49.697 15:59:19 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:08:49.697 15:59:19 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:08:49.697 15:59:19 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:08:49.697 
15:59:19 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:08:49.697 15:59:19 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:08:49.697 15:59:19 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:08:49.697 15:59:19 -- common/build_config.sh@53 -- # CONFIG_URING_ZNS=n 00:08:49.697 15:59:19 -- common/build_config.sh@54 -- # CONFIG_WERROR=y 00:08:49.697 15:59:19 -- common/build_config.sh@55 -- # CONFIG_HAVE_LIBBSD=n 00:08:49.697 15:59:19 -- common/build_config.sh@56 -- # CONFIG_UBSAN=y 00:08:49.697 15:59:19 -- common/build_config.sh@57 -- # CONFIG_IPSEC_MB_DIR= 00:08:49.697 15:59:19 -- common/build_config.sh@58 -- # CONFIG_GOLANG=n 00:08:49.697 15:59:19 -- common/build_config.sh@59 -- # CONFIG_ISAL=y 00:08:49.697 15:59:19 -- common/build_config.sh@60 -- # CONFIG_IDXD_KERNEL=y 00:08:49.697 15:59:19 -- common/build_config.sh@61 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:49.697 15:59:19 -- common/build_config.sh@62 -- # CONFIG_RDMA_PROV=verbs 00:08:49.697 15:59:19 -- common/build_config.sh@63 -- # CONFIG_APPS=y 00:08:49.697 15:59:19 -- common/build_config.sh@64 -- # CONFIG_SHARED=y 00:08:49.697 15:59:19 -- common/build_config.sh@65 -- # CONFIG_FC_PATH= 00:08:49.697 15:59:19 -- common/build_config.sh@66 -- # CONFIG_DPDK_PKG_CONFIG=n 00:08:49.697 15:59:19 -- common/build_config.sh@67 -- # CONFIG_FC=n 00:08:49.697 15:59:19 -- common/build_config.sh@68 -- # CONFIG_AVAHI=n 00:08:49.697 15:59:19 -- common/build_config.sh@69 -- # CONFIG_FIO_PLUGIN=y 00:08:49.697 15:59:19 -- common/build_config.sh@70 -- # CONFIG_RAID5F=n 00:08:49.697 15:59:19 -- common/build_config.sh@71 -- # CONFIG_EXAMPLES=y 00:08:49.697 15:59:19 -- common/build_config.sh@72 -- # CONFIG_TESTS=y 00:08:49.697 15:59:19 -- common/build_config.sh@73 -- # CONFIG_CRYPTO_MLX5=n 00:08:49.697 15:59:19 -- common/build_config.sh@74 -- # CONFIG_MAX_LCORES= 00:08:49.697 15:59:19 -- common/build_config.sh@75 -- # CONFIG_IPSEC_MB=n 00:08:49.697 15:59:19 -- common/build_config.sh@76 -- # CONFIG_DEBUG=y 00:08:49.697 15:59:19 -- common/build_config.sh@77 -- # CONFIG_DPDK_COMPRESSDEV=n 00:08:49.697 15:59:19 -- common/build_config.sh@78 -- # CONFIG_CROSS_PREFIX= 00:08:49.697 15:59:19 -- common/build_config.sh@79 -- # CONFIG_URING=n 00:08:49.697 15:59:19 -- common/autotest_common.sh@48 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:49.697 15:59:19 -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common/applications.sh 00:08:49.697 15:59:19 -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:49.697 15:59:19 -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/common 00:08:49.697 15:59:19 -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:49.697 15:59:19 -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:49.697 15:59:19 -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app 00:08:49.697 15:59:19 -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:49.697 15:59:19 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:08:49.697 15:59:19 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:08:49.697 15:59:19 -- common/applications.sh@16 -- # 
NVMF_APP=("$_app_dir/nvmf_tgt") 00:08:49.697 15:59:19 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:08:49.697 15:59:19 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:08:49.697 15:59:19 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:08:49.697 15:59:19 -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-phy-autotest/spdk/include/spdk/config.h ]] 00:08:49.697 15:59:19 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:08:49.697 #define SPDK_CONFIG_H 00:08:49.697 #define SPDK_CONFIG_APPS 1 00:08:49.697 #define SPDK_CONFIG_ARCH native 00:08:49.697 #undef SPDK_CONFIG_ASAN 00:08:49.697 #undef SPDK_CONFIG_AVAHI 00:08:49.697 #undef SPDK_CONFIG_CET 00:08:49.697 #define SPDK_CONFIG_COVERAGE 1 00:08:49.697 #define SPDK_CONFIG_CROSS_PREFIX 00:08:49.697 #undef SPDK_CONFIG_CRYPTO 00:08:49.697 #undef SPDK_CONFIG_CRYPTO_MLX5 00:08:49.697 #undef SPDK_CONFIG_CUSTOMOCF 00:08:49.697 #undef SPDK_CONFIG_DAOS 00:08:49.697 #define SPDK_CONFIG_DAOS_DIR 00:08:49.697 #define SPDK_CONFIG_DEBUG 1 00:08:49.697 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:08:49.697 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:49.697 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/include 00:08:49.697 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:49.697 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:08:49.697 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-phy-autotest/spdk/lib/env_dpdk 00:08:49.697 #define SPDK_CONFIG_EXAMPLES 1 00:08:49.697 #undef SPDK_CONFIG_FC 00:08:49.697 #define SPDK_CONFIG_FC_PATH 00:08:49.697 #define SPDK_CONFIG_FIO_PLUGIN 1 00:08:49.697 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:08:49.697 #undef SPDK_CONFIG_FUSE 00:08:49.697 #undef SPDK_CONFIG_FUZZER 00:08:49.698 #define SPDK_CONFIG_FUZZER_LIB 00:08:49.698 #undef SPDK_CONFIG_GOLANG 00:08:49.698 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:08:49.698 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:08:49.698 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:08:49.698 #undef SPDK_CONFIG_HAVE_LIBBSD 00:08:49.698 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:08:49.698 #define SPDK_CONFIG_IDXD 1 00:08:49.698 #define SPDK_CONFIG_IDXD_KERNEL 1 00:08:49.698 #undef SPDK_CONFIG_IPSEC_MB 00:08:49.698 #define SPDK_CONFIG_IPSEC_MB_DIR 00:08:49.698 #define SPDK_CONFIG_ISAL 1 00:08:49.698 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:08:49.698 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:08:49.698 #define SPDK_CONFIG_LIBDIR 00:08:49.698 #undef SPDK_CONFIG_LTO 00:08:49.698 #define SPDK_CONFIG_MAX_LCORES 00:08:49.698 #define SPDK_CONFIG_NVME_CUSE 1 00:08:49.698 #undef SPDK_CONFIG_OCF 00:08:49.698 #define SPDK_CONFIG_OCF_PATH 00:08:49.698 #define SPDK_CONFIG_OPENSSL_PATH 00:08:49.698 #undef SPDK_CONFIG_PGO_CAPTURE 00:08:49.698 #undef SPDK_CONFIG_PGO_USE 00:08:49.698 #define SPDK_CONFIG_PREFIX /usr/local 00:08:49.698 #undef SPDK_CONFIG_RAID5F 00:08:49.698 #undef SPDK_CONFIG_RBD 00:08:49.698 #define SPDK_CONFIG_RDMA 1 00:08:49.698 #define SPDK_CONFIG_RDMA_PROV verbs 00:08:49.698 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:08:49.698 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:08:49.698 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:08:49.698 #define SPDK_CONFIG_SHARED 1 00:08:49.698 #undef SPDK_CONFIG_SMA 00:08:49.698 #define SPDK_CONFIG_TESTS 1 00:08:49.698 #undef SPDK_CONFIG_TSAN 00:08:49.698 #define SPDK_CONFIG_UBLK 1 00:08:49.698 #define SPDK_CONFIG_UBSAN 1 00:08:49.698 #undef SPDK_CONFIG_UNIT_TESTS 
00:08:49.698 #undef SPDK_CONFIG_URING 00:08:49.698 #define SPDK_CONFIG_URING_PATH 00:08:49.698 #undef SPDK_CONFIG_URING_ZNS 00:08:49.698 #undef SPDK_CONFIG_USDT 00:08:49.698 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:08:49.698 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:08:49.698 #undef SPDK_CONFIG_VFIO_USER 00:08:49.698 #define SPDK_CONFIG_VFIO_USER_DIR 00:08:49.698 #define SPDK_CONFIG_VHOST 1 00:08:49.698 #define SPDK_CONFIG_VIRTIO 1 00:08:49.698 #undef SPDK_CONFIG_VTUNE 00:08:49.698 #define SPDK_CONFIG_VTUNE_DIR 00:08:49.698 #define SPDK_CONFIG_WERROR 1 00:08:49.698 #define SPDK_CONFIG_WPDK_DIR 00:08:49.698 #undef SPDK_CONFIG_XNVME 00:08:49.698 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:08:49.698 15:59:19 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:08:49.698 15:59:19 -- common/autotest_common.sh@49 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:49.698 15:59:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.698 15:59:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.698 15:59:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.698 15:59:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.698 15:59:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.698 15:59:19 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.698 15:59:19 -- paths/export.sh@5 -- # export PATH 00:08:49.698 15:59:19 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.698 15:59:19 -- common/autotest_common.sh@50 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:49.698 15:59:19 -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/common 00:08:49.698 15:59:19 -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:49.698 15:59:19 -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm 00:08:49.698 15:59:19 -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/perf/pm/../../../ 00:08:49.698 15:59:19 -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-phy-autotest/spdk 00:08:49.698 15:59:19 -- pm/common@16 -- # TEST_TAG=N/A 00:08:49.698 15:59:19 -- pm/common@17 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/.run_test_name 00:08:49.698 15:59:19 -- common/autotest_common.sh@52 -- # : 1 00:08:49.698 15:59:19 -- common/autotest_common.sh@53 -- # export RUN_NIGHTLY 00:08:49.698 15:59:19 -- common/autotest_common.sh@56 -- # : 0 00:08:49.698 15:59:19 -- common/autotest_common.sh@57 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:08:49.698 15:59:19 -- common/autotest_common.sh@58 -- # : 0 00:08:49.698 15:59:19 -- common/autotest_common.sh@59 -- # export SPDK_RUN_VALGRIND 00:08:49.698 15:59:19 -- common/autotest_common.sh@60 -- # : 1 00:08:49.698 15:59:19 -- common/autotest_common.sh@61 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:08:49.698 15:59:19 -- common/autotest_common.sh@62 -- # : 0 00:08:49.698 15:59:19 -- common/autotest_common.sh@63 -- # export SPDK_TEST_UNITTEST 00:08:49.698 15:59:19 -- common/autotest_common.sh@64 -- # : 00:08:49.698 15:59:19 -- common/autotest_common.sh@65 -- # export SPDK_TEST_AUTOBUILD 00:08:49.698 15:59:19 -- common/autotest_common.sh@66 -- # : 0 00:08:49.698 15:59:19 -- common/autotest_common.sh@67 -- # export SPDK_TEST_RELEASE_BUILD 00:08:49.698 15:59:19 -- common/autotest_common.sh@68 -- # : 0 00:08:49.698 15:59:19 -- common/autotest_common.sh@69 -- # export SPDK_TEST_ISAL 00:08:49.698 15:59:19 -- common/autotest_common.sh@70 -- # : 0 00:08:49.698 15:59:19 -- common/autotest_common.sh@71 -- # export SPDK_TEST_ISCSI 00:08:49.698 15:59:19 -- common/autotest_common.sh@72 -- # : 0 00:08:49.698 15:59:19 -- common/autotest_common.sh@73 -- # export SPDK_TEST_ISCSI_INITIATOR 00:08:49.698 15:59:19 -- common/autotest_common.sh@74 -- # : 0 00:08:49.698 15:59:19 -- common/autotest_common.sh@75 -- # export SPDK_TEST_NVME 00:08:49.698 15:59:19 -- common/autotest_common.sh@76 -- # : 0 00:08:49.698 15:59:19 -- common/autotest_common.sh@77 -- # export SPDK_TEST_NVME_PMR 00:08:49.698 15:59:19 -- common/autotest_common.sh@78 -- # : 0 00:08:49.698 15:59:19 -- common/autotest_common.sh@79 -- # export SPDK_TEST_NVME_BP 00:08:49.698 15:59:19 -- common/autotest_common.sh@80 -- # : 1 00:08:49.698 15:59:19 -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME_CLI 00:08:49.698 15:59:19 -- common/autotest_common.sh@82 -- # : 0 
00:08:49.698 15:59:19 -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_CUSE 00:08:49.698 15:59:19 -- common/autotest_common.sh@84 -- # : 0 00:08:49.698 15:59:19 -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_FDP 00:08:49.698 15:59:19 -- common/autotest_common.sh@86 -- # : 1 00:08:49.698 15:59:19 -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVMF 00:08:49.698 15:59:19 -- common/autotest_common.sh@88 -- # : 0 00:08:49.698 15:59:19 -- common/autotest_common.sh@89 -- # export SPDK_TEST_VFIOUSER 00:08:49.698 15:59:19 -- common/autotest_common.sh@90 -- # : 0 00:08:49.698 15:59:19 -- common/autotest_common.sh@91 -- # export SPDK_TEST_VFIOUSER_QEMU 00:08:49.698 15:59:19 -- common/autotest_common.sh@92 -- # : 0 00:08:49.698 15:59:19 -- common/autotest_common.sh@93 -- # export SPDK_TEST_FUZZER 00:08:49.698 15:59:19 -- common/autotest_common.sh@94 -- # : 0 00:08:49.698 15:59:19 -- common/autotest_common.sh@95 -- # export SPDK_TEST_FUZZER_SHORT 00:08:49.698 15:59:19 -- common/autotest_common.sh@96 -- # : rdma 00:08:49.698 15:59:19 -- common/autotest_common.sh@97 -- # export SPDK_TEST_NVMF_TRANSPORT 00:08:49.698 15:59:19 -- common/autotest_common.sh@98 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@99 -- # export SPDK_TEST_RBD 00:08:49.699 15:59:19 -- common/autotest_common.sh@100 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@101 -- # export SPDK_TEST_VHOST 00:08:49.699 15:59:19 -- common/autotest_common.sh@102 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@103 -- # export SPDK_TEST_BLOCKDEV 00:08:49.699 15:59:19 -- common/autotest_common.sh@104 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@105 -- # export SPDK_TEST_IOAT 00:08:49.699 15:59:19 -- common/autotest_common.sh@106 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@107 -- # export SPDK_TEST_BLOBFS 00:08:49.699 15:59:19 -- common/autotest_common.sh@108 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@109 -- # export SPDK_TEST_VHOST_INIT 00:08:49.699 15:59:19 -- common/autotest_common.sh@110 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@111 -- # export SPDK_TEST_LVOL 00:08:49.699 15:59:19 -- common/autotest_common.sh@112 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@113 -- # export SPDK_TEST_VBDEV_COMPRESS 00:08:49.699 15:59:19 -- common/autotest_common.sh@114 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@115 -- # export SPDK_RUN_ASAN 00:08:49.699 15:59:19 -- common/autotest_common.sh@116 -- # : 1 00:08:49.699 15:59:19 -- common/autotest_common.sh@117 -- # export SPDK_RUN_UBSAN 00:08:49.699 15:59:19 -- common/autotest_common.sh@118 -- # : /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:08:49.699 15:59:19 -- common/autotest_common.sh@119 -- # export SPDK_RUN_EXTERNAL_DPDK 00:08:49.699 15:59:19 -- common/autotest_common.sh@120 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@121 -- # export SPDK_RUN_NON_ROOT 00:08:49.699 15:59:19 -- common/autotest_common.sh@122 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@123 -- # export SPDK_TEST_CRYPTO 00:08:49.699 15:59:19 -- common/autotest_common.sh@124 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@125 -- # export SPDK_TEST_FTL 00:08:49.699 15:59:19 -- common/autotest_common.sh@126 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@127 -- # export SPDK_TEST_OCF 00:08:49.699 15:59:19 -- common/autotest_common.sh@128 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@129 -- # export SPDK_TEST_VMD 
00:08:49.699 15:59:19 -- common/autotest_common.sh@130 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@131 -- # export SPDK_TEST_OPAL 00:08:49.699 15:59:19 -- common/autotest_common.sh@132 -- # : v22.11.4 00:08:49.699 15:59:19 -- common/autotest_common.sh@133 -- # export SPDK_TEST_NATIVE_DPDK 00:08:49.699 15:59:19 -- common/autotest_common.sh@134 -- # : true 00:08:49.699 15:59:19 -- common/autotest_common.sh@135 -- # export SPDK_AUTOTEST_X 00:08:49.699 15:59:19 -- common/autotest_common.sh@136 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@137 -- # export SPDK_TEST_RAID5 00:08:49.699 15:59:19 -- common/autotest_common.sh@138 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@139 -- # export SPDK_TEST_URING 00:08:49.699 15:59:19 -- common/autotest_common.sh@140 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@141 -- # export SPDK_TEST_USDT 00:08:49.699 15:59:19 -- common/autotest_common.sh@142 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@143 -- # export SPDK_TEST_USE_IGB_UIO 00:08:49.699 15:59:19 -- common/autotest_common.sh@144 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@145 -- # export SPDK_TEST_SCHEDULER 00:08:49.699 15:59:19 -- common/autotest_common.sh@146 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@147 -- # export SPDK_TEST_SCANBUILD 00:08:49.699 15:59:19 -- common/autotest_common.sh@148 -- # : mlx5 00:08:49.699 15:59:19 -- common/autotest_common.sh@149 -- # export SPDK_TEST_NVMF_NICS 00:08:49.699 15:59:19 -- common/autotest_common.sh@150 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@151 -- # export SPDK_TEST_SMA 00:08:49.699 15:59:19 -- common/autotest_common.sh@152 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@153 -- # export SPDK_TEST_DAOS 00:08:49.699 15:59:19 -- common/autotest_common.sh@154 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@155 -- # export SPDK_TEST_XNVME 00:08:49.699 15:59:19 -- common/autotest_common.sh@156 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@157 -- # export SPDK_TEST_ACCEL_DSA 00:08:49.699 15:59:19 -- common/autotest_common.sh@158 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@159 -- # export SPDK_TEST_ACCEL_IAA 00:08:49.699 15:59:19 -- common/autotest_common.sh@160 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@161 -- # export SPDK_TEST_ACCEL_IOAT 00:08:49.699 15:59:19 -- common/autotest_common.sh@163 -- # : 00:08:49.699 15:59:19 -- common/autotest_common.sh@164 -- # export SPDK_TEST_FUZZER_TARGET 00:08:49.699 15:59:19 -- common/autotest_common.sh@165 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@166 -- # export SPDK_TEST_NVMF_MDNS 00:08:49.699 15:59:19 -- common/autotest_common.sh@167 -- # : 0 00:08:49.699 15:59:19 -- common/autotest_common.sh@168 -- # export SPDK_JSONRPC_GO_CLIENT 00:08:49.699 15:59:19 -- common/autotest_common.sh@171 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:49.699 15:59:19 -- common/autotest_common.sh@171 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib 00:08:49.699 15:59:19 -- common/autotest_common.sh@172 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:49.699 15:59:19 -- common/autotest_common.sh@172 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib 00:08:49.699 15:59:19 -- common/autotest_common.sh@173 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 
00:08:49.699 15:59:19 -- common/autotest_common.sh@173 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:49.699 15:59:19 -- common/autotest_common.sh@174 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:49.699 15:59:19 -- common/autotest_common.sh@174 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:08:49.699 15:59:19 -- common/autotest_common.sh@177 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:08:49.699 15:59:19 -- common/autotest_common.sh@177 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:08:49.699 15:59:19 -- common/autotest_common.sh@181 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:49.699 15:59:19 -- common/autotest_common.sh@181 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-phy-autotest/spdk/python 00:08:49.699 15:59:19 -- common/autotest_common.sh@185 -- # export PYTHONDONTWRITEBYTECODE=1 00:08:49.699 15:59:19 -- common/autotest_common.sh@185 -- # PYTHONDONTWRITEBYTECODE=1 00:08:49.699 
15:59:19 -- common/autotest_common.sh@189 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:49.699 15:59:19 -- common/autotest_common.sh@189 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:08:49.699 15:59:19 -- common/autotest_common.sh@190 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:49.699 15:59:19 -- common/autotest_common.sh@190 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:08:49.699 15:59:19 -- common/autotest_common.sh@194 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:08:49.699 15:59:19 -- common/autotest_common.sh@195 -- # rm -rf /var/tmp/asan_suppression_file 00:08:49.699 15:59:19 -- common/autotest_common.sh@196 -- # cat 00:08:49.699 15:59:19 -- common/autotest_common.sh@222 -- # echo leak:libfuse3.so 00:08:49.699 15:59:19 -- common/autotest_common.sh@224 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:49.699 15:59:19 -- common/autotest_common.sh@224 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:08:49.699 15:59:19 -- common/autotest_common.sh@226 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:49.699 15:59:19 -- common/autotest_common.sh@226 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:08:49.699 15:59:19 -- common/autotest_common.sh@228 -- # '[' -z /var/spdk/dependencies ']' 00:08:49.699 15:59:19 -- common/autotest_common.sh@231 -- # export DEPENDENCY_DIR 00:08:49.699 15:59:19 -- common/autotest_common.sh@235 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:49.699 15:59:19 -- common/autotest_common.sh@235 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin 00:08:49.699 15:59:19 -- common/autotest_common.sh@236 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:49.699 15:59:19 -- common/autotest_common.sh@236 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples 00:08:49.699 15:59:19 -- common/autotest_common.sh@239 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:49.699 15:59:19 -- common/autotest_common.sh@239 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:08:49.699 15:59:19 -- common/autotest_common.sh@240 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:49.699 15:59:19 -- common/autotest_common.sh@240 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:08:49.699 15:59:19 -- common/autotest_common.sh@242 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:49.700 15:59:19 -- common/autotest_common.sh@242 -- # AR_TOOL=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:08:49.700 15:59:19 -- common/autotest_common.sh@245 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:49.700 15:59:19 -- common/autotest_common.sh@245 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:08:49.700 15:59:19 -- common/autotest_common.sh@247 -- # _LCOV_MAIN=0 00:08:49.700 15:59:19 -- common/autotest_common.sh@248 -- # _LCOV_LLVM=1 00:08:49.700 15:59:19 -- common/autotest_common.sh@249 -- # _LCOV= 00:08:49.700 15:59:19 -- common/autotest_common.sh@250 -- # [[ '' == *clang* ]] 00:08:49.700 15:59:19 -- common/autotest_common.sh@250 -- # [[ 0 -eq 1 ]] 00:08:49.700 15:59:19 -- 
common/autotest_common.sh@252 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:08:49.700 15:59:19 -- common/autotest_common.sh@253 -- # _lcov_opt[_LCOV_MAIN]= 00:08:49.700 15:59:19 -- common/autotest_common.sh@255 -- # lcov_opt= 00:08:49.700 15:59:19 -- common/autotest_common.sh@258 -- # '[' 0 -eq 0 ']' 00:08:49.700 15:59:19 -- common/autotest_common.sh@259 -- # export valgrind= 00:08:49.700 15:59:19 -- common/autotest_common.sh@259 -- # valgrind= 00:08:49.700 15:59:19 -- common/autotest_common.sh@265 -- # uname -s 00:08:49.700 15:59:19 -- common/autotest_common.sh@265 -- # '[' Linux = Linux ']' 00:08:49.700 15:59:19 -- common/autotest_common.sh@266 -- # HUGEMEM=4096 00:08:49.700 15:59:19 -- common/autotest_common.sh@267 -- # export CLEAR_HUGE=yes 00:08:49.700 15:59:19 -- common/autotest_common.sh@267 -- # CLEAR_HUGE=yes 00:08:49.700 15:59:19 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:49.700 15:59:19 -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:08:49.700 15:59:19 -- common/autotest_common.sh@275 -- # MAKE=make 00:08:49.700 15:59:19 -- common/autotest_common.sh@276 -- # MAKEFLAGS=-j112 00:08:49.700 15:59:19 -- common/autotest_common.sh@292 -- # export HUGEMEM=4096 00:08:49.700 15:59:19 -- common/autotest_common.sh@292 -- # HUGEMEM=4096 00:08:49.700 15:59:19 -- common/autotest_common.sh@294 -- # '[' -z /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output ']' 00:08:49.700 15:59:19 -- common/autotest_common.sh@299 -- # NO_HUGE=() 00:08:49.700 15:59:19 -- common/autotest_common.sh@300 -- # TEST_MODE= 00:08:49.700 15:59:19 -- common/autotest_common.sh@301 -- # for i in "$@" 00:08:49.700 15:59:19 -- common/autotest_common.sh@302 -- # case "$i" in 00:08:49.700 15:59:19 -- common/autotest_common.sh@307 -- # TEST_TRANSPORT=rdma 00:08:49.700 15:59:19 -- common/autotest_common.sh@319 -- # [[ -z 1211590 ]] 00:08:49.700 15:59:19 -- common/autotest_common.sh@319 -- # kill -0 1211590 00:08:49.700 15:59:19 -- common/autotest_common.sh@1675 -- # set_test_storage 2147483648 00:08:49.700 15:59:19 -- common/autotest_common.sh@329 -- # [[ -v testdir ]] 00:08:49.700 15:59:19 -- common/autotest_common.sh@331 -- # local requested_size=2147483648 00:08:49.700 15:59:19 -- common/autotest_common.sh@332 -- # local mount target_dir 00:08:49.700 15:59:19 -- common/autotest_common.sh@334 -- # local -A mounts fss sizes avails uses 00:08:49.700 15:59:19 -- common/autotest_common.sh@335 -- # local source fs size avail mount use 00:08:49.700 15:59:19 -- common/autotest_common.sh@337 -- # local storage_fallback storage_candidates 00:08:49.700 15:59:19 -- common/autotest_common.sh@339 -- # mktemp -udt spdk.XXXXXX 00:08:49.700 15:59:19 -- common/autotest_common.sh@339 -- # storage_fallback=/tmp/spdk.QI7617 00:08:49.700 15:59:19 -- common/autotest_common.sh@344 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:08:49.700 15:59:19 -- common/autotest_common.sh@346 -- # [[ -n '' ]] 00:08:49.700 15:59:19 -- common/autotest_common.sh@351 -- # [[ -n '' ]] 00:08:49.700 15:59:19 -- common/autotest_common.sh@356 -- # mkdir -p /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target /tmp/spdk.QI7617/tests/target /tmp/spdk.QI7617 00:08:49.700 15:59:19 -- common/autotest_common.sh@359 -- # requested_size=2214592512 00:08:49.700 15:59:19 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:49.700 15:59:19 -- common/autotest_common.sh@328 -- # df -T 
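set_test_storage asks for roughly 2 GiB (2147483648 bytes) of scratch space and then walks df output looking for a mount with enough room, preferring the test directory itself and falling back to a mktemp path under /tmp. A rough sketch of that selection under the same 2 GiB request (the loop below is illustrative, not the script's own helper):
  # Pick the first candidate directory whose filesystem has >= 2 GiB available.
  requested_size=2147483648
  testdir=${testdir:-$PWD}                                   # assumption: run from the test directory
  fallback=$(mktemp -udt spdk.XXXXXX)
  for dir in "$testdir" "$fallback/tests/${testdir##*/}" "$fallback"; do
      mkdir -p "$dir" 2>/dev/null
      avail=$(df --output=avail -B1 "$dir" 2>/dev/null | tail -n 1)
      if [ -n "$avail" ] && [ "$avail" -ge "$requested_size" ]; then
          export SPDK_TEST_STORAGE=$dir
          break
      fi
  done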
00:08:49.700 15:59:19 -- common/autotest_common.sh@328 -- # grep -v Filesystem 00:08:49.700 15:59:19 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_devtmpfs 00:08:49.700 15:59:19 -- common/autotest_common.sh@362 -- # fss["$mount"]=devtmpfs 00:08:49.700 15:59:19 -- common/autotest_common.sh@363 -- # avails["$mount"]=67108864 00:08:49.700 15:59:19 -- common/autotest_common.sh@363 -- # sizes["$mount"]=67108864 00:08:49.700 15:59:19 -- common/autotest_common.sh@364 -- # uses["$mount"]=0 00:08:49.700 15:59:19 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:49.700 15:59:19 -- common/autotest_common.sh@362 -- # mounts["$mount"]=/dev/pmem0 00:08:49.700 15:59:19 -- common/autotest_common.sh@362 -- # fss["$mount"]=ext2 00:08:49.700 15:59:19 -- common/autotest_common.sh@363 -- # avails["$mount"]=4096 00:08:49.700 15:59:19 -- common/autotest_common.sh@363 -- # sizes["$mount"]=5284429824 00:08:49.700 15:59:19 -- common/autotest_common.sh@364 -- # uses["$mount"]=5284425728 00:08:49.700 15:59:19 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:49.700 15:59:19 -- common/autotest_common.sh@362 -- # mounts["$mount"]=spdk_root 00:08:49.700 15:59:19 -- common/autotest_common.sh@362 -- # fss["$mount"]=overlay 00:08:49.700 15:59:19 -- common/autotest_common.sh@363 -- # avails["$mount"]=54788575232 00:08:49.700 15:59:19 -- common/autotest_common.sh@363 -- # sizes["$mount"]=61730586624 00:08:49.700 15:59:19 -- common/autotest_common.sh@364 -- # uses["$mount"]=6942011392 00:08:49.700 15:59:19 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:49.700 15:59:19 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:49.700 15:59:19 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:49.700 15:59:19 -- common/autotest_common.sh@363 -- # avails["$mount"]=30864035840 00:08:49.700 15:59:19 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865293312 00:08:49.700 15:59:19 -- common/autotest_common.sh@364 -- # uses["$mount"]=1257472 00:08:49.700 15:59:19 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:49.700 15:59:19 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:49.700 15:59:19 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:49.700 15:59:19 -- common/autotest_common.sh@363 -- # avails["$mount"]=12336680960 00:08:49.700 15:59:19 -- common/autotest_common.sh@363 -- # sizes["$mount"]=12346118144 00:08:49.700 15:59:19 -- common/autotest_common.sh@364 -- # uses["$mount"]=9437184 00:08:49.700 15:59:19 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:49.700 15:59:19 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:49.700 15:59:19 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:49.700 15:59:19 -- common/autotest_common.sh@363 -- # avails["$mount"]=30865076224 00:08:49.700 15:59:19 -- common/autotest_common.sh@363 -- # sizes["$mount"]=30865293312 00:08:49.700 15:59:19 -- common/autotest_common.sh@364 -- # uses["$mount"]=217088 00:08:49.700 15:59:19 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:49.700 15:59:19 -- common/autotest_common.sh@362 -- # mounts["$mount"]=tmpfs 00:08:49.700 15:59:19 -- common/autotest_common.sh@362 -- # fss["$mount"]=tmpfs 00:08:49.700 15:59:19 -- common/autotest_common.sh@363 -- # avails["$mount"]=6173044736 00:08:49.700 15:59:19 -- common/autotest_common.sh@363 -- # 
sizes["$mount"]=6173057024 00:08:49.700 15:59:19 -- common/autotest_common.sh@364 -- # uses["$mount"]=12288 00:08:49.700 15:59:19 -- common/autotest_common.sh@361 -- # read -r source fs size use avail _ mount 00:08:49.700 15:59:19 -- common/autotest_common.sh@367 -- # printf '* Looking for test storage...\n' 00:08:49.700 * Looking for test storage... 00:08:49.700 15:59:19 -- common/autotest_common.sh@369 -- # local target_space new_size 00:08:49.700 15:59:19 -- common/autotest_common.sh@370 -- # for target_dir in "${storage_candidates[@]}" 00:08:49.700 15:59:19 -- common/autotest_common.sh@373 -- # df /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:49.700 15:59:19 -- common/autotest_common.sh@373 -- # awk '$1 !~ /Filesystem/{print $6}' 00:08:49.700 15:59:19 -- common/autotest_common.sh@373 -- # mount=/ 00:08:49.700 15:59:19 -- common/autotest_common.sh@375 -- # target_space=54788575232 00:08:49.700 15:59:19 -- common/autotest_common.sh@376 -- # (( target_space == 0 || target_space < requested_size )) 00:08:49.700 15:59:19 -- common/autotest_common.sh@379 -- # (( target_space >= requested_size )) 00:08:49.700 15:59:19 -- common/autotest_common.sh@381 -- # [[ overlay == tmpfs ]] 00:08:49.700 15:59:19 -- common/autotest_common.sh@381 -- # [[ overlay == ramfs ]] 00:08:49.700 15:59:19 -- common/autotest_common.sh@381 -- # [[ / == / ]] 00:08:49.700 15:59:19 -- common/autotest_common.sh@382 -- # new_size=9156603904 00:08:49.700 15:59:19 -- common/autotest_common.sh@383 -- # (( new_size * 100 / sizes[/] > 95 )) 00:08:49.700 15:59:19 -- common/autotest_common.sh@388 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:49.700 15:59:19 -- common/autotest_common.sh@388 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:49.700 15:59:19 -- common/autotest_common.sh@389 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:49.700 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:08:49.700 15:59:19 -- common/autotest_common.sh@390 -- # return 0 00:08:49.701 15:59:19 -- common/autotest_common.sh@1677 -- # set -o errtrace 00:08:49.701 15:59:19 -- common/autotest_common.sh@1678 -- # shopt -s extdebug 00:08:49.701 15:59:19 -- common/autotest_common.sh@1679 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:08:49.701 15:59:19 -- common/autotest_common.sh@1681 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:08:49.701 15:59:19 -- common/autotest_common.sh@1682 -- # true 00:08:49.701 15:59:19 -- common/autotest_common.sh@1684 -- # xtrace_fd 00:08:49.701 15:59:19 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:08:49.701 15:59:19 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:08:49.701 15:59:19 -- common/autotest_common.sh@27 -- # exec 00:08:49.701 15:59:19 -- common/autotest_common.sh@29 -- # exec 00:08:49.701 15:59:19 -- common/autotest_common.sh@31 -- # xtrace_restore 00:08:49.701 15:59:19 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:08:49.701 15:59:19 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:08:49.701 15:59:19 -- common/autotest_common.sh@18 -- # set -x 00:08:49.701 15:59:19 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:08:49.701 15:59:19 -- common/autotest_common.sh@1690 -- # lcov --version 00:08:49.701 15:59:19 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:08:49.701 15:59:19 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:08:49.701 15:59:19 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:08:49.701 15:59:19 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:08:49.701 15:59:19 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:08:49.701 15:59:19 -- scripts/common.sh@335 -- # IFS=.-: 00:08:49.701 15:59:19 -- scripts/common.sh@335 -- # read -ra ver1 00:08:49.701 15:59:19 -- scripts/common.sh@336 -- # IFS=.-: 00:08:49.701 15:59:19 -- scripts/common.sh@336 -- # read -ra ver2 00:08:49.701 15:59:19 -- scripts/common.sh@337 -- # local 'op=<' 00:08:49.701 15:59:19 -- scripts/common.sh@339 -- # ver1_l=2 00:08:49.701 15:59:19 -- scripts/common.sh@340 -- # ver2_l=1 00:08:49.701 15:59:19 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:08:49.701 15:59:19 -- scripts/common.sh@343 -- # case "$op" in 00:08:49.701 15:59:19 -- scripts/common.sh@344 -- # : 1 00:08:49.701 15:59:19 -- scripts/common.sh@363 -- # (( v = 0 )) 00:08:49.701 15:59:19 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:49.701 15:59:19 -- scripts/common.sh@364 -- # decimal 1 00:08:49.701 15:59:19 -- scripts/common.sh@352 -- # local d=1 00:08:49.701 15:59:19 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:49.701 15:59:19 -- scripts/common.sh@354 -- # echo 1 00:08:49.701 15:59:19 -- scripts/common.sh@364 -- # ver1[v]=1 00:08:49.701 15:59:19 -- scripts/common.sh@365 -- # decimal 2 00:08:49.701 15:59:19 -- scripts/common.sh@352 -- # local d=2 00:08:49.701 15:59:19 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:49.701 15:59:19 -- scripts/common.sh@354 -- # echo 2 00:08:49.701 15:59:19 -- scripts/common.sh@365 -- # ver2[v]=2 00:08:49.701 15:59:19 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:08:49.701 15:59:19 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:08:49.701 15:59:19 -- scripts/common.sh@367 -- # return 0 00:08:49.701 15:59:19 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:49.701 15:59:19 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:08:49.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.701 --rc genhtml_branch_coverage=1 00:08:49.701 --rc genhtml_function_coverage=1 00:08:49.701 --rc genhtml_legend=1 00:08:49.701 --rc geninfo_all_blocks=1 00:08:49.701 --rc geninfo_unexecuted_blocks=1 00:08:49.701 00:08:49.701 ' 00:08:49.701 15:59:19 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:08:49.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.701 --rc genhtml_branch_coverage=1 00:08:49.701 --rc genhtml_function_coverage=1 00:08:49.701 --rc genhtml_legend=1 00:08:49.701 --rc geninfo_all_blocks=1 00:08:49.701 --rc geninfo_unexecuted_blocks=1 00:08:49.701 00:08:49.701 ' 00:08:49.701 15:59:19 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:08:49.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.701 --rc genhtml_branch_coverage=1 00:08:49.701 --rc genhtml_function_coverage=1 00:08:49.701 --rc genhtml_legend=1 00:08:49.701 --rc geninfo_all_blocks=1 00:08:49.701 --rc 
geninfo_unexecuted_blocks=1 00:08:49.701 00:08:49.701 ' 00:08:49.701 15:59:19 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:08:49.701 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:49.701 --rc genhtml_branch_coverage=1 00:08:49.701 --rc genhtml_function_coverage=1 00:08:49.701 --rc genhtml_legend=1 00:08:49.701 --rc geninfo_all_blocks=1 00:08:49.701 --rc geninfo_unexecuted_blocks=1 00:08:49.701 00:08:49.701 ' 00:08:49.701 15:59:19 -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:08:49.701 15:59:19 -- nvmf/common.sh@7 -- # uname -s 00:08:49.701 15:59:19 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:49.701 15:59:19 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:49.701 15:59:19 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:49.701 15:59:19 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:49.701 15:59:19 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:49.701 15:59:19 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:49.701 15:59:19 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:49.701 15:59:19 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:49.701 15:59:19 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:49.701 15:59:19 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:49.701 15:59:19 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:08:49.701 15:59:19 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:08:49.701 15:59:19 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:49.701 15:59:19 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:49.701 15:59:19 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:49.701 15:59:19 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:08:49.701 15:59:19 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:49.701 15:59:19 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:49.701 15:59:19 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:49.701 15:59:19 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.701 15:59:19 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.701 15:59:19 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.701 15:59:19 -- paths/export.sh@5 -- # export PATH 00:08:49.702 15:59:19 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:49.702 15:59:19 -- nvmf/common.sh@46 -- # : 0 00:08:49.702 15:59:19 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:08:49.702 15:59:19 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:08:49.702 15:59:19 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:08:49.702 15:59:19 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:49.702 15:59:19 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:49.702 15:59:19 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:08:49.702 15:59:19 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:08:49.702 15:59:19 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:08:49.702 15:59:19 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:08:49.702 15:59:19 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:08:49.702 15:59:19 -- target/filesystem.sh@15 -- # nvmftestinit 00:08:49.702 15:59:19 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:08:49.702 15:59:19 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:49.702 15:59:19 -- nvmf/common.sh@436 -- # prepare_net_devs 00:08:49.702 15:59:19 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:08:49.702 15:59:19 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:08:49.702 15:59:19 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:49.702 15:59:19 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:49.702 15:59:19 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:49.702 15:59:19 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:08:49.702 15:59:19 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:08:49.702 15:59:19 -- nvmf/common.sh@284 -- # xtrace_disable 00:08:49.702 15:59:19 -- common/autotest_common.sh@10 -- # set +x 00:08:56.276 15:59:26 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:08:56.276 15:59:26 -- nvmf/common.sh@290 -- # pci_devs=() 00:08:56.276 15:59:26 -- nvmf/common.sh@290 -- # local -a pci_devs 00:08:56.276 15:59:26 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:08:56.276 15:59:26 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:08:56.276 15:59:26 -- nvmf/common.sh@292 -- # pci_drivers=() 00:08:56.277 15:59:26 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:08:56.277 15:59:26 -- 
nvmf/common.sh@294 -- # net_devs=() 00:08:56.277 15:59:26 -- nvmf/common.sh@294 -- # local -ga net_devs 00:08:56.277 15:59:26 -- nvmf/common.sh@295 -- # e810=() 00:08:56.277 15:59:26 -- nvmf/common.sh@295 -- # local -ga e810 00:08:56.277 15:59:26 -- nvmf/common.sh@296 -- # x722=() 00:08:56.277 15:59:26 -- nvmf/common.sh@296 -- # local -ga x722 00:08:56.277 15:59:26 -- nvmf/common.sh@297 -- # mlx=() 00:08:56.277 15:59:26 -- nvmf/common.sh@297 -- # local -ga mlx 00:08:56.277 15:59:26 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.277 15:59:26 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.277 15:59:26 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.277 15:59:26 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.277 15:59:26 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.277 15:59:26 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.277 15:59:26 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.277 15:59:26 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.277 15:59:26 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.277 15:59:26 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.277 15:59:26 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.277 15:59:26 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:08:56.277 15:59:26 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:08:56.277 15:59:26 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:08:56.277 15:59:26 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:08:56.277 15:59:26 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:08:56.277 15:59:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:56.277 15:59:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:08:56.277 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:08:56.277 15:59:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:56.277 15:59:26 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:08:56.277 15:59:26 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:08:56.277 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:08:56.277 15:59:26 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:08:56.277 15:59:26 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:08:56.277 15:59:26 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:56.277 
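The discovery pass above has matched two Mellanox ConnectX ports (0000:d9:00.0 and 0000:d9:00.1, vendor/device 0x15b3:0x1015); the loop that follows maps each PCI function to its kernel netdev and reads off its IPv4 address. The same lookup can be done by hand, as a sketch (PCI addresses and interface names taken from the trace):
  # Map each mlx5 PCI function to its kernel netdev and check its IPv4 address.
  for pci in 0000:d9:00.0 0000:d9:00.1; do
      netdev=$(ls /sys/bus/pci/devices/$pci/net/)            # e.g. mlx_0_0 / mlx_0_1
      echo "$pci -> $netdev"
      ip -o -4 addr show "$netdev" | awk '{print $4}' | cut -d/ -f1
  done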
15:59:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.277 15:59:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:56.277 15:59:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.277 15:59:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:08:56.277 Found net devices under 0000:d9:00.0: mlx_0_0 00:08:56.277 15:59:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.277 15:59:26 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:08:56.277 15:59:26 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.277 15:59:26 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:08:56.277 15:59:26 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.277 15:59:26 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:08:56.277 Found net devices under 0000:d9:00.1: mlx_0_1 00:08:56.277 15:59:26 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.277 15:59:26 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:08:56.277 15:59:26 -- nvmf/common.sh@402 -- # is_hw=yes 00:08:56.277 15:59:26 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@408 -- # rdma_device_init 00:08:56.277 15:59:26 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:08:56.277 15:59:26 -- nvmf/common.sh@57 -- # uname 00:08:56.277 15:59:26 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:08:56.277 15:59:26 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:08:56.277 15:59:26 -- nvmf/common.sh@62 -- # modprobe ib_core 00:08:56.277 15:59:26 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:08:56.277 15:59:26 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:08:56.277 15:59:26 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:08:56.277 15:59:26 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:08:56.277 15:59:26 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:08:56.277 15:59:26 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:08:56.277 15:59:26 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:08:56.277 15:59:26 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:08:56.277 15:59:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:56.277 15:59:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:56.277 15:59:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:56.277 15:59:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:56.277 15:59:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:56.277 15:59:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:56.277 15:59:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:56.277 15:59:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:56.277 15:59:26 -- nvmf/common.sh@104 -- # continue 2 00:08:56.277 15:59:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:56.277 15:59:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:56.277 15:59:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:56.277 15:59:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:08:56.277 15:59:26 -- nvmf/common.sh@104 -- # continue 2 00:08:56.277 15:59:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:56.277 15:59:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:08:56.277 15:59:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:56.277 15:59:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:56.277 15:59:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:56.277 15:59:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:56.277 15:59:26 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:08:56.277 15:59:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:08:56.277 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:56.277 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:08:56.277 altname enp217s0f0np0 00:08:56.277 altname ens818f0np0 00:08:56.277 inet 192.168.100.8/24 scope global mlx_0_0 00:08:56.277 valid_lft forever preferred_lft forever 00:08:56.277 15:59:26 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:08:56.277 15:59:26 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:08:56.277 15:59:26 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:56.277 15:59:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:56.277 15:59:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:56.277 15:59:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:56.277 15:59:26 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:08:56.277 15:59:26 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:08:56.277 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:08:56.277 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:08:56.277 altname enp217s0f1np1 00:08:56.277 altname ens818f1np1 00:08:56.277 inet 192.168.100.9/24 scope global mlx_0_1 00:08:56.277 valid_lft forever preferred_lft forever 00:08:56.277 15:59:26 -- nvmf/common.sh@410 -- # return 0 00:08:56.277 15:59:26 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:08:56.277 15:59:26 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:08:56.277 15:59:26 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:08:56.277 15:59:26 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:08:56.277 15:59:26 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:08:56.277 15:59:26 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:08:56.277 15:59:26 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:08:56.277 15:59:26 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:08:56.277 15:59:26 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:08:56.277 15:59:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:56.277 15:59:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:56.277 15:59:26 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:08:56.277 15:59:26 -- nvmf/common.sh@104 -- # continue 2 00:08:56.277 15:59:26 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:08:56.277 15:59:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:56.277 15:59:26 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:08:56.277 15:59:26 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:08:56.277 15:59:26 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:08:56.277 15:59:26 -- nvmf/common.sh@104 -- # continue 2 00:08:56.277 15:59:26 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:56.277 15:59:26 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:08:56.277 15:59:26 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:08:56.277 15:59:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:08:56.277 15:59:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:56.277 15:59:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:56.278 15:59:26 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:08:56.278 15:59:26 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:08:56.278 15:59:26 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:08:56.278 15:59:26 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:08:56.278 15:59:26 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:08:56.278 15:59:26 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:08:56.278 15:59:26 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:08:56.278 192.168.100.9' 00:08:56.278 15:59:26 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:08:56.278 192.168.100.9' 00:08:56.278 15:59:26 -- nvmf/common.sh@445 -- # head -n 1 00:08:56.278 15:59:26 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:08:56.278 15:59:26 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:08:56.278 192.168.100.9' 00:08:56.278 15:59:26 -- nvmf/common.sh@446 -- # tail -n +2 00:08:56.278 15:59:26 -- nvmf/common.sh@446 -- # head -n 1 00:08:56.278 15:59:26 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:08:56.278 15:59:26 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:08:56.278 15:59:26 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:08:56.278 15:59:26 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:08:56.278 15:59:26 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:08:56.278 15:59:26 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:08:56.278 15:59:26 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:08:56.278 15:59:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:08:56.278 15:59:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:08:56.278 15:59:26 -- common/autotest_common.sh@10 -- # set +x 00:08:56.278 ************************************ 00:08:56.278 START TEST nvmf_filesystem_no_in_capsule 00:08:56.278 ************************************ 00:08:56.278 15:59:26 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 0 00:08:56.278 15:59:26 -- target/filesystem.sh@47 -- # in_capsule=0 00:08:56.278 15:59:26 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:56.278 15:59:26 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:08:56.278 15:59:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:56.278 15:59:26 -- common/autotest_common.sh@10 -- # set +x 00:08:56.278 15:59:26 -- nvmf/common.sh@469 -- # nvmfpid=1215012 00:08:56.278 15:59:26 -- nvmf/common.sh@470 -- # waitforlisten 1215012 00:08:56.278 15:59:26 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:56.278 15:59:26 -- common/autotest_common.sh@829 -- # '[' -z 1215012 ']' 00:08:56.278 15:59:26 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.278 15:59:26 -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:56.278 15:59:26 -- common/autotest_common.sh@836 
-- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.278 15:59:26 -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:56.278 15:59:26 -- common/autotest_common.sh@10 -- # set +x 00:08:56.278 [2024-11-20 15:59:26.904832] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:08:56.278 [2024-11-20 15:59:26.904893] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.278 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.278 [2024-11-20 15:59:26.976720] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:56.278 [2024-11-20 15:59:27.017182] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:08:56.278 [2024-11-20 15:59:27.017314] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.278 [2024-11-20 15:59:27.017325] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.278 [2024-11-20 15:59:27.017334] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.278 [2024-11-20 15:59:27.017426] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:08:56.278 [2024-11-20 15:59:27.017535] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:08:56.278 [2024-11-20 15:59:27.017586] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:08:56.278 [2024-11-20 15:59:27.017588] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.215 15:59:27 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:57.215 15:59:27 -- common/autotest_common.sh@862 -- # return 0 00:08:57.215 15:59:27 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:08:57.215 15:59:27 -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:57.215 15:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:57.215 15:59:27 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:57.215 15:59:27 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:57.215 15:59:27 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:08:57.215 15:59:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.215 15:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:57.215 [2024-11-20 15:59:27.773966] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:08:57.215 [2024-11-20 15:59:27.794885] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xb800f0/0xb845c0) succeed. 00:08:57.215 [2024-11-20 15:59:27.804129] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xb81690/0xbc5c60) succeed. 
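The transport has just been created with in-capsule data disabled (-c 0). The next stretch of the trace builds the rest of the export path over JSON-RPC and then attaches to it from the host: a 512 MB malloc bdev with 512-byte blocks, a subsystem with that namespace and an RDMA listener on 192.168.100.8:4420, followed by an nvme connect against the listener. Condensed into the equivalent manual commands, as a sketch rather than the test script itself (RPC socket defaults to /var/tmp/spdk.sock; NQN, serial, hostnqn and hostid values are copied from the trace):
  # Target side: drive nvmf_tgt over scripts/rpc.py.
  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
  $RPC bdev_malloc_create 512 512 -b Malloc1
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

  # Host side: connect over RDMA and wait for the namespace to show up.
  nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 \
       --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
       --hostid=8013ee90-59d8-e711-906e-00163566263e
  lsblk -l -o NAME,SERIAL | grep SPDKISFASTANDAWESOME       # device appears with the serial set above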
00:08:57.215 15:59:27 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.215 15:59:27 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:57.215 15:59:27 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.215 15:59:27 -- common/autotest_common.sh@10 -- # set +x 00:08:57.475 Malloc1 00:08:57.475 15:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.475 15:59:28 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:57.475 15:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.475 15:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:57.475 15:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.475 15:59:28 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:57.475 15:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.475 15:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:57.475 15:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.475 15:59:28 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:08:57.475 15:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.475 15:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:57.475 [2024-11-20 15:59:28.050803] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:08:57.475 15:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.475 15:59:28 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:57.475 15:59:28 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:08:57.475 15:59:28 -- common/autotest_common.sh@1368 -- # local bdev_info 00:08:57.475 15:59:28 -- common/autotest_common.sh@1369 -- # local bs 00:08:57.475 15:59:28 -- common/autotest_common.sh@1370 -- # local nb 00:08:57.475 15:59:28 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:57.475 15:59:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.475 15:59:28 -- common/autotest_common.sh@10 -- # set +x 00:08:57.475 15:59:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.475 15:59:28 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:08:57.475 { 00:08:57.475 "name": "Malloc1", 00:08:57.475 "aliases": [ 00:08:57.475 "6a988d33-4cc8-4938-b3b1-72a5f55700bb" 00:08:57.475 ], 00:08:57.475 "product_name": "Malloc disk", 00:08:57.475 "block_size": 512, 00:08:57.475 "num_blocks": 1048576, 00:08:57.475 "uuid": "6a988d33-4cc8-4938-b3b1-72a5f55700bb", 00:08:57.475 "assigned_rate_limits": { 00:08:57.475 "rw_ios_per_sec": 0, 00:08:57.475 "rw_mbytes_per_sec": 0, 00:08:57.475 "r_mbytes_per_sec": 0, 00:08:57.475 "w_mbytes_per_sec": 0 00:08:57.475 }, 00:08:57.475 "claimed": true, 00:08:57.475 "claim_type": "exclusive_write", 00:08:57.475 "zoned": false, 00:08:57.475 "supported_io_types": { 00:08:57.475 "read": true, 00:08:57.475 "write": true, 00:08:57.475 "unmap": true, 00:08:57.475 "write_zeroes": true, 00:08:57.475 "flush": true, 00:08:57.475 "reset": true, 00:08:57.475 "compare": false, 00:08:57.475 "compare_and_write": false, 00:08:57.475 "abort": true, 00:08:57.475 "nvme_admin": false, 00:08:57.475 "nvme_io": false 00:08:57.475 }, 00:08:57.475 "memory_domains": [ 00:08:57.475 { 00:08:57.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:57.475 "dma_device_type": 2 00:08:57.475 } 00:08:57.475 ], 00:08:57.475 
"driver_specific": {} 00:08:57.475 } 00:08:57.475 ]' 00:08:57.475 15:59:28 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:08:57.475 15:59:28 -- common/autotest_common.sh@1372 -- # bs=512 00:08:57.475 15:59:28 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:08:57.475 15:59:28 -- common/autotest_common.sh@1373 -- # nb=1048576 00:08:57.475 15:59:28 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:08:57.475 15:59:28 -- common/autotest_common.sh@1377 -- # echo 512 00:08:57.475 15:59:28 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:57.475 15:59:28 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:08:58.413 15:59:29 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:58.413 15:59:29 -- common/autotest_common.sh@1187 -- # local i=0 00:08:58.413 15:59:29 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:08:58.413 15:59:29 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:08:58.413 15:59:29 -- common/autotest_common.sh@1194 -- # sleep 2 00:09:00.949 15:59:31 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:09:00.949 15:59:31 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:09:00.949 15:59:31 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:09:00.949 15:59:31 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:09:00.949 15:59:31 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:09:00.949 15:59:31 -- common/autotest_common.sh@1197 -- # return 0 00:09:00.949 15:59:31 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:00.949 15:59:31 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:00.949 15:59:31 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:00.949 15:59:31 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:00.949 15:59:31 -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:00.949 15:59:31 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:00.949 15:59:31 -- setup/common.sh@80 -- # echo 536870912 00:09:00.949 15:59:31 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:00.949 15:59:31 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:00.949 15:59:31 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:00.949 15:59:31 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:00.949 15:59:31 -- target/filesystem.sh@69 -- # partprobe 00:09:00.949 15:59:31 -- target/filesystem.sh@70 -- # sleep 1 00:09:01.889 15:59:32 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:01.889 15:59:32 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:01.889 15:59:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:01.889 15:59:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.889 15:59:32 -- common/autotest_common.sh@10 -- # set +x 00:09:01.889 ************************************ 00:09:01.889 START TEST filesystem_ext4 00:09:01.889 ************************************ 00:09:01.889 15:59:32 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:01.889 15:59:32 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:01.889 15:59:32 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:01.889 
15:59:32 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:01.889 15:59:32 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:09:01.889 15:59:32 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:01.889 15:59:32 -- common/autotest_common.sh@914 -- # local i=0 00:09:01.889 15:59:32 -- common/autotest_common.sh@915 -- # local force 00:09:01.889 15:59:32 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:09:01.889 15:59:32 -- common/autotest_common.sh@918 -- # force=-F 00:09:01.889 15:59:32 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:01.889 mke2fs 1.47.0 (5-Feb-2023) 00:09:01.889 Discarding device blocks: 0/522240 done 00:09:01.889 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:01.889 Filesystem UUID: c607dea6-2fd9-4de3-b962-6953bd271c2f 00:09:01.889 Superblock backups stored on blocks: 00:09:01.889 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:01.889 00:09:01.889 Allocating group tables: 0/64 done 00:09:01.889 Writing inode tables: 0/64 done 00:09:01.889 Creating journal (8192 blocks): done 00:09:01.889 Writing superblocks and filesystem accounting information: 0/64 done 00:09:01.889 00:09:01.889 15:59:32 -- common/autotest_common.sh@931 -- # return 0 00:09:01.889 15:59:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:01.889 15:59:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:01.889 15:59:32 -- target/filesystem.sh@25 -- # sync 00:09:01.889 15:59:32 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:01.889 15:59:32 -- target/filesystem.sh@27 -- # sync 00:09:01.889 15:59:32 -- target/filesystem.sh@29 -- # i=0 00:09:01.889 15:59:32 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:01.889 15:59:32 -- target/filesystem.sh@37 -- # kill -0 1215012 00:09:01.889 15:59:32 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:01.889 15:59:32 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:01.889 15:59:32 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:01.889 15:59:32 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:01.889 00:09:01.889 real 0m0.198s 00:09:01.889 user 0m0.031s 00:09:01.889 sys 0m0.077s 00:09:01.889 15:59:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:01.889 15:59:32 -- common/autotest_common.sh@10 -- # set +x 00:09:01.889 ************************************ 00:09:01.889 END TEST filesystem_ext4 00:09:01.889 ************************************ 00:09:01.889 15:59:32 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:01.889 15:59:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:01.889 15:59:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:01.890 15:59:32 -- common/autotest_common.sh@10 -- # set +x 00:09:01.890 ************************************ 00:09:01.890 START TEST filesystem_btrfs 00:09:01.890 ************************************ 00:09:01.890 15:59:32 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:01.890 15:59:32 -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:01.890 15:59:32 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:01.890 15:59:32 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:01.890 15:59:32 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:09:01.890 15:59:32 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:01.890 15:59:32 -- common/autotest_common.sh@914 -- # local 
i=0 00:09:01.890 15:59:32 -- common/autotest_common.sh@915 -- # local force 00:09:01.890 15:59:32 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:09:01.890 15:59:32 -- common/autotest_common.sh@920 -- # force=-f 00:09:01.890 15:59:32 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:02.149 btrfs-progs v6.8.1 00:09:02.149 See https://btrfs.readthedocs.io for more information. 00:09:02.149 00:09:02.149 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:02.149 NOTE: several default settings have changed in version 5.15, please make sure 00:09:02.149 this does not affect your deployments: 00:09:02.149 - DUP for metadata (-m dup) 00:09:02.149 - enabled no-holes (-O no-holes) 00:09:02.149 - enabled free-space-tree (-R free-space-tree) 00:09:02.149 00:09:02.149 Label: (null) 00:09:02.149 UUID: 114c37bb-3d75-486f-a5ce-44c8a8682338 00:09:02.149 Node size: 16384 00:09:02.149 Sector size: 4096 (CPU page size: 4096) 00:09:02.149 Filesystem size: 510.00MiB 00:09:02.149 Block group profiles: 00:09:02.149 Data: single 8.00MiB 00:09:02.149 Metadata: DUP 32.00MiB 00:09:02.149 System: DUP 8.00MiB 00:09:02.149 SSD detected: yes 00:09:02.149 Zoned device: no 00:09:02.149 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:02.149 Checksum: crc32c 00:09:02.149 Number of devices: 1 00:09:02.149 Devices: 00:09:02.149 ID SIZE PATH 00:09:02.149 1 510.00MiB /dev/nvme0n1p1 00:09:02.149 00:09:02.149 15:59:32 -- common/autotest_common.sh@931 -- # return 0 00:09:02.149 15:59:32 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:02.149 15:59:32 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:02.149 15:59:32 -- target/filesystem.sh@25 -- # sync 00:09:02.149 15:59:32 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:02.149 15:59:32 -- target/filesystem.sh@27 -- # sync 00:09:02.149 15:59:32 -- target/filesystem.sh@29 -- # i=0 00:09:02.149 15:59:32 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:02.149 15:59:32 -- target/filesystem.sh@37 -- # kill -0 1215012 00:09:02.149 15:59:32 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:02.149 15:59:32 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:02.149 15:59:32 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:02.149 15:59:32 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:02.149 00:09:02.149 real 0m0.245s 00:09:02.149 user 0m0.029s 00:09:02.149 sys 0m0.128s 00:09:02.149 15:59:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.149 15:59:32 -- common/autotest_common.sh@10 -- # set +x 00:09:02.149 ************************************ 00:09:02.149 END TEST filesystem_btrfs 00:09:02.149 ************************************ 00:09:02.149 15:59:32 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:02.149 15:59:32 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:02.149 15:59:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:02.149 15:59:32 -- common/autotest_common.sh@10 -- # set +x 00:09:02.149 ************************************ 00:09:02.149 START TEST filesystem_xfs 00:09:02.149 ************************************ 00:09:02.149 15:59:32 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:09:02.149 15:59:32 -- target/filesystem.sh@18 -- # fstype=xfs 00:09:02.149 15:59:32 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:02.149 15:59:32 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:02.149 15:59:32 -- 
common/autotest_common.sh@912 -- # local fstype=xfs 00:09:02.149 15:59:32 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:02.149 15:59:32 -- common/autotest_common.sh@914 -- # local i=0 00:09:02.149 15:59:32 -- common/autotest_common.sh@915 -- # local force 00:09:02.149 15:59:32 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:09:02.149 15:59:32 -- common/autotest_common.sh@920 -- # force=-f 00:09:02.149 15:59:32 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:02.409 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:02.409 = sectsz=512 attr=2, projid32bit=1 00:09:02.409 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:02.409 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:02.409 data = bsize=4096 blocks=130560, imaxpct=25 00:09:02.409 = sunit=0 swidth=0 blks 00:09:02.409 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:02.409 log =internal log bsize=4096 blocks=16384, version=2 00:09:02.409 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:02.409 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:02.409 Discarding blocks...Done. 00:09:02.409 15:59:33 -- common/autotest_common.sh@931 -- # return 0 00:09:02.409 15:59:33 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:02.409 15:59:33 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:02.409 15:59:33 -- target/filesystem.sh@25 -- # sync 00:09:02.409 15:59:33 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:02.409 15:59:33 -- target/filesystem.sh@27 -- # sync 00:09:02.409 15:59:33 -- target/filesystem.sh@29 -- # i=0 00:09:02.409 15:59:33 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:02.409 15:59:33 -- target/filesystem.sh@37 -- # kill -0 1215012 00:09:02.409 15:59:33 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:02.409 15:59:33 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:02.409 15:59:33 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:02.409 15:59:33 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:02.409 00:09:02.409 real 0m0.204s 00:09:02.409 user 0m0.035s 00:09:02.409 sys 0m0.074s 00:09:02.409 15:59:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:02.409 15:59:33 -- common/autotest_common.sh@10 -- # set +x 00:09:02.409 ************************************ 00:09:02.409 END TEST filesystem_xfs 00:09:02.409 ************************************ 00:09:02.409 15:59:33 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:02.409 15:59:33 -- target/filesystem.sh@93 -- # sync 00:09:02.409 15:59:33 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:03.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.788 15:59:34 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:03.788 15:59:34 -- common/autotest_common.sh@1208 -- # local i=0 00:09:03.788 15:59:34 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:09:03.788 15:59:34 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.788 15:59:34 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:09:03.788 15:59:34 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:03.788 15:59:34 -- common/autotest_common.sh@1220 -- # return 0 00:09:03.788 15:59:34 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:03.788 15:59:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:03.788 15:59:34 -- 
common/autotest_common.sh@10 -- # set +x 00:09:03.788 15:59:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:03.788 15:59:34 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:03.788 15:59:34 -- target/filesystem.sh@101 -- # killprocess 1215012 00:09:03.788 15:59:34 -- common/autotest_common.sh@936 -- # '[' -z 1215012 ']' 00:09:03.788 15:59:34 -- common/autotest_common.sh@940 -- # kill -0 1215012 00:09:03.788 15:59:34 -- common/autotest_common.sh@941 -- # uname 00:09:03.788 15:59:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:03.788 15:59:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1215012 00:09:03.788 15:59:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:03.788 15:59:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:03.788 15:59:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1215012' 00:09:03.788 killing process with pid 1215012 00:09:03.788 15:59:34 -- common/autotest_common.sh@955 -- # kill 1215012 00:09:03.788 15:59:34 -- common/autotest_common.sh@960 -- # wait 1215012 00:09:04.048 15:59:34 -- target/filesystem.sh@102 -- # nvmfpid= 00:09:04.048 00:09:04.048 real 0m7.809s 00:09:04.048 user 0m30.530s 00:09:04.048 sys 0m1.193s 00:09:04.048 15:59:34 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:04.048 15:59:34 -- common/autotest_common.sh@10 -- # set +x 00:09:04.048 ************************************ 00:09:04.048 END TEST nvmf_filesystem_no_in_capsule 00:09:04.048 ************************************ 00:09:04.048 15:59:34 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:04.048 15:59:34 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:04.048 15:59:34 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:04.048 15:59:34 -- common/autotest_common.sh@10 -- # set +x 00:09:04.048 ************************************ 00:09:04.048 START TEST nvmf_filesystem_in_capsule 00:09:04.048 ************************************ 00:09:04.048 15:59:34 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_part 4096 00:09:04.048 15:59:34 -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:04.048 15:59:34 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:04.048 15:59:34 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:04.048 15:59:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:04.048 15:59:34 -- common/autotest_common.sh@10 -- # set +x 00:09:04.048 15:59:34 -- nvmf/common.sh@469 -- # nvmfpid=1216581 00:09:04.048 15:59:34 -- nvmf/common.sh@470 -- # waitforlisten 1216581 00:09:04.048 15:59:34 -- common/autotest_common.sh@829 -- # '[' -z 1216581 ']' 00:09:04.048 15:59:34 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.048 15:59:34 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:04.048 15:59:34 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:04.048 15:59:34 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:04.048 15:59:34 -- common/autotest_common.sh@10 -- # set +x 00:09:04.048 15:59:34 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:04.048 [2024-11-20 15:59:34.755090] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:04.048 [2024-11-20 15:59:34.755141] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:04.048 EAL: No free 2048 kB hugepages reported on node 1 00:09:04.048 [2024-11-20 15:59:34.825427] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:04.307 [2024-11-20 15:59:34.863184] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:04.307 [2024-11-20 15:59:34.863291] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:04.307 [2024-11-20 15:59:34.863300] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:04.307 [2024-11-20 15:59:34.863309] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:04.307 [2024-11-20 15:59:34.863363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.307 [2024-11-20 15:59:34.863441] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.307 [2024-11-20 15:59:34.863533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:04.307 [2024-11-20 15:59:34.863534] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.875 15:59:35 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:04.875 15:59:35 -- common/autotest_common.sh@862 -- # return 0 00:09:04.875 15:59:35 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:04.875 15:59:35 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:04.875 15:59:35 -- common/autotest_common.sh@10 -- # set +x 00:09:04.875 15:59:35 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:04.875 15:59:35 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:04.875 15:59:35 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096 00:09:04.875 15:59:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.875 15:59:35 -- common/autotest_common.sh@10 -- # set +x 00:09:04.875 [2024-11-20 15:59:35.658766] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x83af30/0x83f400) succeed. 00:09:04.875 [2024-11-20 15:59:35.668197] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x83c4d0/0x880aa0) succeed. 
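The trace above covers the target bring-up for the in-capsule variant: nvmf_tgt is started and nvmf_create_transport is called with -c 4096, so host-to-controller data of up to 4096 bytes can be carried inline in the command capsule rather than fetched with an RDMA read. The rpc_cmd calls traced next export a malloc bdev over that transport; as a minimal stand-alone sketch, the roughly equivalent scripts/rpc.py invocations (RPC names, NQN, serial, address and port taken from this run; assumes nvmf_tgt is already listening on the default /var/tmp/spdk.sock) are:

# Sketch of the target-side setup this test drives (not the literal filesystem.sh code).
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 4096
./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

The host side of the test then attaches with nvme connect (traced further below), partitions the resulting nvme0n1, and repeats the ext4/btrfs/xfs mkfs, mount, touch, sync, umount smoke test against that partition.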
00:09:05.134 15:59:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.134 15:59:35 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:05.134 15:59:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.134 15:59:35 -- common/autotest_common.sh@10 -- # set +x 00:09:05.134 Malloc1 00:09:05.134 15:59:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.134 15:59:35 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:05.134 15:59:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.134 15:59:35 -- common/autotest_common.sh@10 -- # set +x 00:09:05.134 15:59:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.134 15:59:35 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:05.134 15:59:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.134 15:59:35 -- common/autotest_common.sh@10 -- # set +x 00:09:05.134 15:59:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.134 15:59:35 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:05.134 15:59:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.134 15:59:35 -- common/autotest_common.sh@10 -- # set +x 00:09:05.134 [2024-11-20 15:59:35.934754] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:05.394 15:59:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.394 15:59:35 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:05.394 15:59:35 -- common/autotest_common.sh@1367 -- # local bdev_name=Malloc1 00:09:05.394 15:59:35 -- common/autotest_common.sh@1368 -- # local bdev_info 00:09:05.394 15:59:35 -- common/autotest_common.sh@1369 -- # local bs 00:09:05.394 15:59:35 -- common/autotest_common.sh@1370 -- # local nb 00:09:05.394 15:59:35 -- common/autotest_common.sh@1371 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:05.394 15:59:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:05.394 15:59:35 -- common/autotest_common.sh@10 -- # set +x 00:09:05.394 15:59:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:05.394 15:59:35 -- common/autotest_common.sh@1371 -- # bdev_info='[ 00:09:05.394 { 00:09:05.394 "name": "Malloc1", 00:09:05.394 "aliases": [ 00:09:05.394 "ad75f418-e574-4f45-9aff-825ebce4a51e" 00:09:05.394 ], 00:09:05.394 "product_name": "Malloc disk", 00:09:05.394 "block_size": 512, 00:09:05.394 "num_blocks": 1048576, 00:09:05.394 "uuid": "ad75f418-e574-4f45-9aff-825ebce4a51e", 00:09:05.394 "assigned_rate_limits": { 00:09:05.394 "rw_ios_per_sec": 0, 00:09:05.394 "rw_mbytes_per_sec": 0, 00:09:05.394 "r_mbytes_per_sec": 0, 00:09:05.394 "w_mbytes_per_sec": 0 00:09:05.394 }, 00:09:05.394 "claimed": true, 00:09:05.394 "claim_type": "exclusive_write", 00:09:05.394 "zoned": false, 00:09:05.394 "supported_io_types": { 00:09:05.394 "read": true, 00:09:05.394 "write": true, 00:09:05.394 "unmap": true, 00:09:05.394 "write_zeroes": true, 00:09:05.394 "flush": true, 00:09:05.394 "reset": true, 00:09:05.394 "compare": false, 00:09:05.394 "compare_and_write": false, 00:09:05.394 "abort": true, 00:09:05.394 "nvme_admin": false, 00:09:05.394 "nvme_io": false 00:09:05.394 }, 00:09:05.394 "memory_domains": [ 00:09:05.394 { 00:09:05.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:05.394 "dma_device_type": 2 00:09:05.394 } 00:09:05.394 ], 00:09:05.394 
"driver_specific": {} 00:09:05.394 } 00:09:05.394 ]' 00:09:05.394 15:59:35 -- common/autotest_common.sh@1372 -- # jq '.[] .block_size' 00:09:05.394 15:59:36 -- common/autotest_common.sh@1372 -- # bs=512 00:09:05.394 15:59:36 -- common/autotest_common.sh@1373 -- # jq '.[] .num_blocks' 00:09:05.394 15:59:36 -- common/autotest_common.sh@1373 -- # nb=1048576 00:09:05.394 15:59:36 -- common/autotest_common.sh@1376 -- # bdev_size=512 00:09:05.394 15:59:36 -- common/autotest_common.sh@1377 -- # echo 512 00:09:05.394 15:59:36 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:05.394 15:59:36 -- target/filesystem.sh@60 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:09:06.339 15:59:37 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:06.339 15:59:37 -- common/autotest_common.sh@1187 -- # local i=0 00:09:06.339 15:59:37 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:09:06.339 15:59:37 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:09:06.339 15:59:37 -- common/autotest_common.sh@1194 -- # sleep 2 00:09:08.246 15:59:39 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:09:08.246 15:59:39 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:09:08.246 15:59:39 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.505 15:59:39 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:09:08.505 15:59:39 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.505 15:59:39 -- common/autotest_common.sh@1197 -- # return 0 00:09:08.505 15:59:39 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:08.505 15:59:39 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:08.505 15:59:39 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:08.505 15:59:39 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:08.505 15:59:39 -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:08.505 15:59:39 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:08.505 15:59:39 -- setup/common.sh@80 -- # echo 536870912 00:09:08.505 15:59:39 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:08.505 15:59:39 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:08.505 15:59:39 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:08.505 15:59:39 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:08.505 15:59:39 -- target/filesystem.sh@69 -- # partprobe 00:09:08.505 15:59:39 -- target/filesystem.sh@70 -- # sleep 1 00:09:09.883 15:59:40 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:09:09.883 15:59:40 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:09.883 15:59:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:09.883 15:59:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:09.883 15:59:40 -- common/autotest_common.sh@10 -- # set +x 00:09:09.883 ************************************ 00:09:09.883 START TEST filesystem_in_capsule_ext4 00:09:09.883 ************************************ 00:09:09.883 15:59:40 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:09.883 15:59:40 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:09.883 15:59:40 -- target/filesystem.sh@19 -- # 
nvme_name=nvme0n1 00:09:09.883 15:59:40 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:09.883 15:59:40 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:09:09.883 15:59:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:09.883 15:59:40 -- common/autotest_common.sh@914 -- # local i=0 00:09:09.883 15:59:40 -- common/autotest_common.sh@915 -- # local force 00:09:09.883 15:59:40 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:09:09.883 15:59:40 -- common/autotest_common.sh@918 -- # force=-F 00:09:09.883 15:59:40 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:09.884 mke2fs 1.47.0 (5-Feb-2023) 00:09:09.884 Discarding device blocks: 0/522240 done 00:09:09.884 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:09.884 Filesystem UUID: 054431f7-1107-41e7-aaca-6920938d6128 00:09:09.884 Superblock backups stored on blocks: 00:09:09.884 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:09.884 00:09:09.884 Allocating group tables: 0/64 done 00:09:09.884 Writing inode tables: 0/64 done 00:09:09.884 Creating journal (8192 blocks): done 00:09:09.884 Writing superblocks and filesystem accounting information: 0/64 done 00:09:09.884 00:09:09.884 15:59:40 -- common/autotest_common.sh@931 -- # return 0 00:09:09.884 15:59:40 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:09.884 15:59:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:09.884 15:59:40 -- target/filesystem.sh@25 -- # sync 00:09:09.884 15:59:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:09.884 15:59:40 -- target/filesystem.sh@27 -- # sync 00:09:09.884 15:59:40 -- target/filesystem.sh@29 -- # i=0 00:09:09.884 15:59:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:09.884 15:59:40 -- target/filesystem.sh@37 -- # kill -0 1216581 00:09:09.884 15:59:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:09.884 15:59:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:09.884 15:59:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:09.884 15:59:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:09.884 00:09:09.884 real 0m0.201s 00:09:09.884 user 0m0.034s 00:09:09.884 sys 0m0.074s 00:09:09.884 15:59:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:09.884 15:59:40 -- common/autotest_common.sh@10 -- # set +x 00:09:09.884 ************************************ 00:09:09.884 END TEST filesystem_in_capsule_ext4 00:09:09.884 ************************************ 00:09:09.884 15:59:40 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:09.884 15:59:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:09.884 15:59:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:09.884 15:59:40 -- common/autotest_common.sh@10 -- # set +x 00:09:09.884 ************************************ 00:09:09.884 START TEST filesystem_in_capsule_btrfs 00:09:09.884 ************************************ 00:09:09.884 15:59:40 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:09.884 15:59:40 -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:09.884 15:59:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:09.884 15:59:40 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:09.884 15:59:40 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:09:09.884 15:59:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 
00:09:09.884 15:59:40 -- common/autotest_common.sh@914 -- # local i=0 00:09:09.884 15:59:40 -- common/autotest_common.sh@915 -- # local force 00:09:09.884 15:59:40 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:09:09.884 15:59:40 -- common/autotest_common.sh@920 -- # force=-f 00:09:09.884 15:59:40 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:09.884 btrfs-progs v6.8.1 00:09:09.884 See https://btrfs.readthedocs.io for more information. 00:09:09.884 00:09:09.884 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:09.884 NOTE: several default settings have changed in version 5.15, please make sure 00:09:09.884 this does not affect your deployments: 00:09:09.884 - DUP for metadata (-m dup) 00:09:09.884 - enabled no-holes (-O no-holes) 00:09:09.884 - enabled free-space-tree (-R free-space-tree) 00:09:09.884 00:09:09.884 Label: (null) 00:09:09.884 UUID: 84757cd7-d3dc-4d0a-b2bc-c03f218a3f08 00:09:09.884 Node size: 16384 00:09:09.884 Sector size: 4096 (CPU page size: 4096) 00:09:09.884 Filesystem size: 510.00MiB 00:09:09.884 Block group profiles: 00:09:09.884 Data: single 8.00MiB 00:09:09.884 Metadata: DUP 32.00MiB 00:09:09.884 System: DUP 8.00MiB 00:09:09.884 SSD detected: yes 00:09:09.884 Zoned device: no 00:09:09.884 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:09.884 Checksum: crc32c 00:09:09.884 Number of devices: 1 00:09:09.884 Devices: 00:09:09.884 ID SIZE PATH 00:09:09.884 1 510.00MiB /dev/nvme0n1p1 00:09:09.884 00:09:09.884 15:59:40 -- common/autotest_common.sh@931 -- # return 0 00:09:09.884 15:59:40 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:10.143 15:59:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:10.143 15:59:40 -- target/filesystem.sh@25 -- # sync 00:09:10.143 15:59:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:10.143 15:59:40 -- target/filesystem.sh@27 -- # sync 00:09:10.143 15:59:40 -- target/filesystem.sh@29 -- # i=0 00:09:10.143 15:59:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:10.143 15:59:40 -- target/filesystem.sh@37 -- # kill -0 1216581 00:09:10.143 15:59:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:10.143 15:59:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:10.143 15:59:40 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:10.144 15:59:40 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:10.144 00:09:10.144 real 0m0.251s 00:09:10.144 user 0m0.031s 00:09:10.144 sys 0m0.131s 00:09:10.144 15:59:40 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:10.144 15:59:40 -- common/autotest_common.sh@10 -- # set +x 00:09:10.144 ************************************ 00:09:10.144 END TEST filesystem_in_capsule_btrfs 00:09:10.144 ************************************ 00:09:10.144 15:59:40 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:09:10.144 15:59:40 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:09:10.144 15:59:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:10.144 15:59:40 -- common/autotest_common.sh@10 -- # set +x 00:09:10.144 ************************************ 00:09:10.144 START TEST filesystem_in_capsule_xfs 00:09:10.144 ************************************ 00:09:10.144 15:59:40 -- common/autotest_common.sh@1114 -- # nvmf_filesystem_create xfs nvme0n1 00:09:10.144 15:59:40 -- target/filesystem.sh@18 -- # fstype=xfs 00:09:10.144 15:59:40 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:10.144 
15:59:40 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:10.144 15:59:40 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:09:10.144 15:59:40 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:09:10.144 15:59:40 -- common/autotest_common.sh@914 -- # local i=0 00:09:10.144 15:59:40 -- common/autotest_common.sh@915 -- # local force 00:09:10.144 15:59:40 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:09:10.144 15:59:40 -- common/autotest_common.sh@920 -- # force=-f 00:09:10.144 15:59:40 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:10.144 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:10.144 = sectsz=512 attr=2, projid32bit=1 00:09:10.144 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:10.144 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:10.144 data = bsize=4096 blocks=130560, imaxpct=25 00:09:10.144 = sunit=0 swidth=0 blks 00:09:10.144 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:10.144 log =internal log bsize=4096 blocks=16384, version=2 00:09:10.144 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:10.144 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:10.144 Discarding blocks...Done. 00:09:10.144 15:59:40 -- common/autotest_common.sh@931 -- # return 0 00:09:10.144 15:59:40 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:10.403 15:59:40 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:10.403 15:59:40 -- target/filesystem.sh@25 -- # sync 00:09:10.403 15:59:40 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:10.403 15:59:40 -- target/filesystem.sh@27 -- # sync 00:09:10.403 15:59:40 -- target/filesystem.sh@29 -- # i=0 00:09:10.403 15:59:40 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:10.403 15:59:40 -- target/filesystem.sh@37 -- # kill -0 1216581 00:09:10.403 15:59:40 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:10.403 15:59:40 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:10.403 15:59:41 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:10.403 15:59:41 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:10.403 00:09:10.403 real 0m0.204s 00:09:10.403 user 0m0.028s 00:09:10.403 sys 0m0.078s 00:09:10.403 15:59:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:10.403 15:59:41 -- common/autotest_common.sh@10 -- # set +x 00:09:10.403 ************************************ 00:09:10.403 END TEST filesystem_in_capsule_xfs 00:09:10.403 ************************************ 00:09:10.403 15:59:41 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:10.403 15:59:41 -- target/filesystem.sh@93 -- # sync 00:09:10.403 15:59:41 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:11.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:11.342 15:59:42 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:11.342 15:59:42 -- common/autotest_common.sh@1208 -- # local i=0 00:09:11.342 15:59:42 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:09:11.342 15:59:42 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.342 15:59:42 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:09:11.342 15:59:42 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:11.342 15:59:42 -- common/autotest_common.sh@1220 -- # return 0 00:09:11.342 15:59:42 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:09:11.342 15:59:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.342 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:09:11.342 15:59:42 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.342 15:59:42 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:11.342 15:59:42 -- target/filesystem.sh@101 -- # killprocess 1216581 00:09:11.342 15:59:42 -- common/autotest_common.sh@936 -- # '[' -z 1216581 ']' 00:09:11.342 15:59:42 -- common/autotest_common.sh@940 -- # kill -0 1216581 00:09:11.342 15:59:42 -- common/autotest_common.sh@941 -- # uname 00:09:11.342 15:59:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:11.342 15:59:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1216581 00:09:11.342 15:59:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:11.342 15:59:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:11.342 15:59:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1216581' 00:09:11.342 killing process with pid 1216581 00:09:11.342 15:59:42 -- common/autotest_common.sh@955 -- # kill 1216581 00:09:11.342 15:59:42 -- common/autotest_common.sh@960 -- # wait 1216581 00:09:11.911 15:59:42 -- target/filesystem.sh@102 -- # nvmfpid= 00:09:11.911 00:09:11.911 real 0m7.835s 00:09:11.911 user 0m30.643s 00:09:11.911 sys 0m1.201s 00:09:11.911 15:59:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:11.911 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:09:11.911 ************************************ 00:09:11.911 END TEST nvmf_filesystem_in_capsule 00:09:11.911 ************************************ 00:09:11.911 15:59:42 -- target/filesystem.sh@108 -- # nvmftestfini 00:09:11.911 15:59:42 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:11.911 15:59:42 -- nvmf/common.sh@116 -- # sync 00:09:11.911 15:59:42 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:09:11.911 15:59:42 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:09:11.911 15:59:42 -- nvmf/common.sh@119 -- # set +e 00:09:11.911 15:59:42 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:11.911 15:59:42 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:09:11.911 rmmod nvme_rdma 00:09:11.911 rmmod nvme_fabrics 00:09:11.911 15:59:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:11.911 15:59:42 -- nvmf/common.sh@123 -- # set -e 00:09:11.911 15:59:42 -- nvmf/common.sh@124 -- # return 0 00:09:11.911 15:59:42 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:09:11.911 15:59:42 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:11.911 15:59:42 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:09:11.911 00:09:11.911 real 0m23.094s 00:09:11.911 user 1m3.432s 00:09:11.911 sys 0m7.846s 00:09:11.911 15:59:42 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:11.911 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:09:11.911 ************************************ 00:09:11.911 END TEST nvmf_filesystem 00:09:11.911 ************************************ 00:09:11.911 15:59:42 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:11.911 15:59:42 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:11.911 15:59:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:11.911 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:09:11.911 ************************************ 00:09:11.911 START TEST nvmf_discovery 00:09:11.911 
************************************ 00:09:11.911 15:59:42 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=rdma 00:09:12.175 * Looking for test storage... 00:09:12.175 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:12.175 15:59:42 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:12.175 15:59:42 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:12.175 15:59:42 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:12.175 15:59:42 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:12.175 15:59:42 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:12.175 15:59:42 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:12.175 15:59:42 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:12.175 15:59:42 -- scripts/common.sh@335 -- # IFS=.-: 00:09:12.175 15:59:42 -- scripts/common.sh@335 -- # read -ra ver1 00:09:12.175 15:59:42 -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.175 15:59:42 -- scripts/common.sh@336 -- # read -ra ver2 00:09:12.175 15:59:42 -- scripts/common.sh@337 -- # local 'op=<' 00:09:12.175 15:59:42 -- scripts/common.sh@339 -- # ver1_l=2 00:09:12.175 15:59:42 -- scripts/common.sh@340 -- # ver2_l=1 00:09:12.175 15:59:42 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:12.175 15:59:42 -- scripts/common.sh@343 -- # case "$op" in 00:09:12.175 15:59:42 -- scripts/common.sh@344 -- # : 1 00:09:12.175 15:59:42 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:12.175 15:59:42 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:12.175 15:59:42 -- scripts/common.sh@364 -- # decimal 1 00:09:12.175 15:59:42 -- scripts/common.sh@352 -- # local d=1 00:09:12.175 15:59:42 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.175 15:59:42 -- scripts/common.sh@354 -- # echo 1 00:09:12.175 15:59:42 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:12.175 15:59:42 -- scripts/common.sh@365 -- # decimal 2 00:09:12.175 15:59:42 -- scripts/common.sh@352 -- # local d=2 00:09:12.175 15:59:42 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.175 15:59:42 -- scripts/common.sh@354 -- # echo 2 00:09:12.175 15:59:42 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:12.175 15:59:42 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:12.175 15:59:42 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:12.175 15:59:42 -- scripts/common.sh@367 -- # return 0 00:09:12.175 15:59:42 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.175 15:59:42 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:12.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.175 --rc genhtml_branch_coverage=1 00:09:12.175 --rc genhtml_function_coverage=1 00:09:12.175 --rc genhtml_legend=1 00:09:12.175 --rc geninfo_all_blocks=1 00:09:12.175 --rc geninfo_unexecuted_blocks=1 00:09:12.175 00:09:12.175 ' 00:09:12.175 15:59:42 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:12.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.175 --rc genhtml_branch_coverage=1 00:09:12.175 --rc genhtml_function_coverage=1 00:09:12.175 --rc genhtml_legend=1 00:09:12.175 --rc geninfo_all_blocks=1 00:09:12.175 --rc geninfo_unexecuted_blocks=1 00:09:12.175 00:09:12.175 ' 00:09:12.175 15:59:42 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:12.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:09:12.175 --rc genhtml_branch_coverage=1 00:09:12.175 --rc genhtml_function_coverage=1 00:09:12.175 --rc genhtml_legend=1 00:09:12.175 --rc geninfo_all_blocks=1 00:09:12.175 --rc geninfo_unexecuted_blocks=1 00:09:12.175 00:09:12.175 ' 00:09:12.175 15:59:42 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:12.175 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.175 --rc genhtml_branch_coverage=1 00:09:12.175 --rc genhtml_function_coverage=1 00:09:12.175 --rc genhtml_legend=1 00:09:12.175 --rc geninfo_all_blocks=1 00:09:12.175 --rc geninfo_unexecuted_blocks=1 00:09:12.175 00:09:12.175 ' 00:09:12.175 15:59:42 -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:12.175 15:59:42 -- nvmf/common.sh@7 -- # uname -s 00:09:12.175 15:59:42 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:12.175 15:59:42 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:12.175 15:59:42 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:12.175 15:59:42 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:12.175 15:59:42 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:12.175 15:59:42 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:12.175 15:59:42 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:12.175 15:59:42 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:12.175 15:59:42 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:12.175 15:59:42 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:12.175 15:59:42 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:12.175 15:59:42 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:12.175 15:59:42 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:12.175 15:59:42 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:12.175 15:59:42 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:12.175 15:59:42 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:12.175 15:59:42 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:12.175 15:59:42 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:12.175 15:59:42 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:12.175 15:59:42 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.175 15:59:42 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.175 15:59:42 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.175 15:59:42 -- paths/export.sh@5 -- # export PATH 00:09:12.175 15:59:42 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:12.175 15:59:42 -- nvmf/common.sh@46 -- # : 0 00:09:12.175 15:59:42 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:12.175 15:59:42 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:12.175 15:59:42 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:12.175 15:59:42 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:12.176 15:59:42 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:12.176 15:59:42 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:12.176 15:59:42 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:12.176 15:59:42 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:12.176 15:59:42 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:09:12.176 15:59:42 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:09:12.176 15:59:42 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:09:12.176 15:59:42 -- target/discovery.sh@15 -- # hash nvme 00:09:12.176 15:59:42 -- target/discovery.sh@20 -- # nvmftestinit 00:09:12.176 15:59:42 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:09:12.176 15:59:42 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:12.176 15:59:42 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:12.176 15:59:42 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:12.176 15:59:42 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:12.176 15:59:42 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:12.176 15:59:42 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:12.176 15:59:42 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:12.176 15:59:42 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:12.176 15:59:42 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:12.176 15:59:42 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:12.176 15:59:42 -- common/autotest_common.sh@10 -- # set +x 00:09:18.750 15:59:49 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:18.750 15:59:49 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:18.750 15:59:49 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:18.750 15:59:49 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:18.750 15:59:49 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:18.750 15:59:49 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:18.750 15:59:49 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:18.750 15:59:49 -- 
nvmf/common.sh@294 -- # net_devs=() 00:09:18.750 15:59:49 -- nvmf/common.sh@294 -- # local -ga net_devs 00:09:18.750 15:59:49 -- nvmf/common.sh@295 -- # e810=() 00:09:18.750 15:59:49 -- nvmf/common.sh@295 -- # local -ga e810 00:09:18.750 15:59:49 -- nvmf/common.sh@296 -- # x722=() 00:09:18.750 15:59:49 -- nvmf/common.sh@296 -- # local -ga x722 00:09:18.750 15:59:49 -- nvmf/common.sh@297 -- # mlx=() 00:09:18.750 15:59:49 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:18.750 15:59:49 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.750 15:59:49 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.750 15:59:49 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.750 15:59:49 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.750 15:59:49 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.750 15:59:49 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.750 15:59:49 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.750 15:59:49 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.750 15:59:49 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.750 15:59:49 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.750 15:59:49 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.751 15:59:49 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:18.751 15:59:49 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:09:18.751 15:59:49 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:09:18.751 15:59:49 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:09:18.751 15:59:49 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:18.751 15:59:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:18.751 15:59:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:18.751 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:18.751 15:59:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:18.751 15:59:49 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:18.751 15:59:49 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:18.751 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:18.751 15:59:49 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:18.751 15:59:49 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:18.751 15:59:49 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:18.751 
15:59:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.751 15:59:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:18.751 15:59:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.751 15:59:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:18.751 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:18.751 15:59:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.751 15:59:49 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:18.751 15:59:49 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.751 15:59:49 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:18.751 15:59:49 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.751 15:59:49 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:18.751 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:18.751 15:59:49 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.751 15:59:49 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:18.751 15:59:49 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:18.751 15:59:49 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@408 -- # rdma_device_init 00:09:18.751 15:59:49 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:09:18.751 15:59:49 -- nvmf/common.sh@57 -- # uname 00:09:18.751 15:59:49 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:09:18.751 15:59:49 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:09:18.751 15:59:49 -- nvmf/common.sh@62 -- # modprobe ib_core 00:09:18.751 15:59:49 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:09:18.751 15:59:49 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:09:18.751 15:59:49 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:09:18.751 15:59:49 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:09:18.751 15:59:49 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:09:18.751 15:59:49 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:09:18.751 15:59:49 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:18.751 15:59:49 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:09:18.751 15:59:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:18.751 15:59:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:18.751 15:59:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:18.751 15:59:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:18.751 15:59:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:18.751 15:59:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:18.751 15:59:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.751 15:59:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:18.751 15:59:49 -- nvmf/common.sh@104 -- # continue 2 00:09:18.751 15:59:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:18.751 15:59:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.751 15:59:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.751 15:59:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@103 -- # 
echo mlx_0_1 00:09:18.751 15:59:49 -- nvmf/common.sh@104 -- # continue 2 00:09:18.751 15:59:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:18.751 15:59:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:09:18.751 15:59:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:18.751 15:59:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:18.751 15:59:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:18.751 15:59:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:18.751 15:59:49 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:09:18.751 15:59:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:09:18.751 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:18.751 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:18.751 altname enp217s0f0np0 00:09:18.751 altname ens818f0np0 00:09:18.751 inet 192.168.100.8/24 scope global mlx_0_0 00:09:18.751 valid_lft forever preferred_lft forever 00:09:18.751 15:59:49 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:18.751 15:59:49 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:09:18.751 15:59:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:18.751 15:59:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:18.751 15:59:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:18.751 15:59:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:18.751 15:59:49 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:09:18.751 15:59:49 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:09:18.751 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:18.751 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:18.751 altname enp217s0f1np1 00:09:18.751 altname ens818f1np1 00:09:18.751 inet 192.168.100.9/24 scope global mlx_0_1 00:09:18.751 valid_lft forever preferred_lft forever 00:09:18.751 15:59:49 -- nvmf/common.sh@410 -- # return 0 00:09:18.751 15:59:49 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:18.751 15:59:49 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:18.751 15:59:49 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:09:18.751 15:59:49 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:09:18.751 15:59:49 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:18.751 15:59:49 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:18.751 15:59:49 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:18.751 15:59:49 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:18.751 15:59:49 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:18.751 15:59:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:18.751 15:59:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.751 15:59:49 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:18.751 15:59:49 -- nvmf/common.sh@104 -- # continue 2 00:09:18.751 15:59:49 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:18.751 15:59:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.751 15:59:49 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:18.751 15:59:49 -- nvmf/common.sh@102 -- 
# [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:18.751 15:59:49 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:18.751 15:59:49 -- nvmf/common.sh@104 -- # continue 2 00:09:18.751 15:59:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:18.751 15:59:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:09:18.751 15:59:49 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:18.751 15:59:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:18.751 15:59:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:18.751 15:59:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:18.751 15:59:49 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:18.751 15:59:49 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:09:18.751 15:59:49 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:18.751 15:59:49 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:18.751 15:59:49 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:18.751 15:59:49 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:18.751 15:59:49 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:09:18.751 192.168.100.9' 00:09:18.751 15:59:49 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:09:18.751 192.168.100.9' 00:09:18.751 15:59:49 -- nvmf/common.sh@445 -- # head -n 1 00:09:18.751 15:59:49 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:18.751 15:59:49 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:18.751 192.168.100.9' 00:09:18.751 15:59:49 -- nvmf/common.sh@446 -- # tail -n +2 00:09:18.751 15:59:49 -- nvmf/common.sh@446 -- # head -n 1 00:09:18.751 15:59:49 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:18.751 15:59:49 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:09:18.751 15:59:49 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:18.751 15:59:49 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:09:18.751 15:59:49 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:09:18.751 15:59:49 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:09:18.751 15:59:49 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:09:18.752 15:59:49 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:18.752 15:59:49 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:18.752 15:59:49 -- common/autotest_common.sh@10 -- # set +x 00:09:18.752 15:59:49 -- nvmf/common.sh@469 -- # nvmfpid=1221415 00:09:18.752 15:59:49 -- nvmf/common.sh@470 -- # waitforlisten 1221415 00:09:18.752 15:59:49 -- common/autotest_common.sh@829 -- # '[' -z 1221415 ']' 00:09:18.752 15:59:49 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.752 15:59:49 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.752 15:59:49 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.752 15:59:49 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.752 15:59:49 -- common/autotest_common.sh@10 -- # set +x 00:09:18.752 15:59:49 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:18.752 [2024-11-20 15:59:49.476950] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:18.752 [2024-11-20 15:59:49.476999] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.752 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.752 [2024-11-20 15:59:49.548580] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:19.011 [2024-11-20 15:59:49.586552] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:19.011 [2024-11-20 15:59:49.586677] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:19.011 [2024-11-20 15:59:49.586688] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:19.011 [2024-11-20 15:59:49.586697] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:19.011 [2024-11-20 15:59:49.586737] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.011 [2024-11-20 15:59:49.586843] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:19.011 [2024-11-20 15:59:49.586867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:19.011 [2024-11-20 15:59:49.586868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.579 15:59:50 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:19.579 15:59:50 -- common/autotest_common.sh@862 -- # return 0 00:09:19.579 15:59:50 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:19.579 15:59:50 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:19.579 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.579 15:59:50 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.579 15:59:50 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:19.579 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.579 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.579 [2024-11-20 15:59:50.360270] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xf830d0/0xf875a0) succeed. 00:09:19.579 [2024-11-20 15:59:50.369443] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xf84670/0xfc8c40) succeed. 
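As in the earlier sections, the trace above is the target bring-up (nvmf_tgt start, DPDK EAL init, mlx5 IB device registration) for the discovery test. The rpc_cmd calls that follow create four null-bdev subsystems plus a discovery listener and a referral; a compact sketch of the same configuration via scripts/rpc.py (values taken from this run, assuming the default /var/tmp/spdk.sock socket; not the literal discovery.sh code) looks like:

# Sketch of the discovery-test target configuration.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
for i in 1 2 3 4; do
    ./scripts/rpc.py bdev_null_create Null$i 102400 512
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430

With four NVMe subsystems, the current discovery subsystem, and the port 4430 referral, the nvme discover call traced below reports six discovery log records.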
00:09:19.839 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.839 15:59:50 -- target/discovery.sh@26 -- # seq 1 4 00:09:19.839 15:59:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:19.839 15:59:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:09:19.839 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.839 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.839 Null1 00:09:19.839 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.839 15:59:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:19.839 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.839 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.839 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.839 15:59:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:09:19.839 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.839 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.839 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.839 15:59:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:19.839 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.839 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.839 [2024-11-20 15:59:50.536769] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:19.839 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.839 15:59:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:19.839 15:59:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:09:19.839 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.839 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.839 Null2 00:09:19.839 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.839 15:59:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:09:19.839 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.839 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.839 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.839 15:59:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:09:19.839 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.839 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.839 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.839 15:59:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:09:19.839 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.839 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.839 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.839 15:59:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:19.839 15:59:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:09:19.839 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.839 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.839 Null3 00:09:19.839 15:59:50 -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:09:19.839 15:59:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:09:19.839 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.839 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.839 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.839 15:59:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:09:19.839 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.839 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.839 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.839 15:59:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:09:19.839 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.839 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.839 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.839 15:59:50 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:09:19.839 15:59:50 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:09:19.839 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.839 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.839 Null4 00:09:19.839 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.839 15:59:50 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:09:19.839 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.839 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.839 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.839 15:59:50 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:09:19.839 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.839 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.839 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.839 15:59:50 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:09:19.839 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.839 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:19.839 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.839 15:59:50 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:09:19.839 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.839 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.099 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.099 15:59:50 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 192.168.100.8 -s 4430 00:09:20.099 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.099 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.099 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.099 15:59:50 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:09:20.099 00:09:20.099 Discovery Log Number of Records 6, Generation counter 6 00:09:20.099 =====Discovery Log Entry 0====== 00:09:20.099 trtype: 
rdma 00:09:20.099 adrfam: ipv4 00:09:20.099 subtype: current discovery subsystem 00:09:20.099 treq: not required 00:09:20.099 portid: 0 00:09:20.099 trsvcid: 4420 00:09:20.099 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:20.099 traddr: 192.168.100.8 00:09:20.099 eflags: explicit discovery connections, duplicate discovery information 00:09:20.099 rdma_prtype: not specified 00:09:20.099 rdma_qptype: connected 00:09:20.099 rdma_cms: rdma-cm 00:09:20.099 rdma_pkey: 0x0000 00:09:20.099 =====Discovery Log Entry 1====== 00:09:20.099 trtype: rdma 00:09:20.099 adrfam: ipv4 00:09:20.099 subtype: nvme subsystem 00:09:20.099 treq: not required 00:09:20.099 portid: 0 00:09:20.099 trsvcid: 4420 00:09:20.099 subnqn: nqn.2016-06.io.spdk:cnode1 00:09:20.099 traddr: 192.168.100.8 00:09:20.099 eflags: none 00:09:20.099 rdma_prtype: not specified 00:09:20.099 rdma_qptype: connected 00:09:20.099 rdma_cms: rdma-cm 00:09:20.099 rdma_pkey: 0x0000 00:09:20.100 =====Discovery Log Entry 2====== 00:09:20.100 trtype: rdma 00:09:20.100 adrfam: ipv4 00:09:20.100 subtype: nvme subsystem 00:09:20.100 treq: not required 00:09:20.100 portid: 0 00:09:20.100 trsvcid: 4420 00:09:20.100 subnqn: nqn.2016-06.io.spdk:cnode2 00:09:20.100 traddr: 192.168.100.8 00:09:20.100 eflags: none 00:09:20.100 rdma_prtype: not specified 00:09:20.100 rdma_qptype: connected 00:09:20.100 rdma_cms: rdma-cm 00:09:20.100 rdma_pkey: 0x0000 00:09:20.100 =====Discovery Log Entry 3====== 00:09:20.100 trtype: rdma 00:09:20.100 adrfam: ipv4 00:09:20.100 subtype: nvme subsystem 00:09:20.100 treq: not required 00:09:20.100 portid: 0 00:09:20.100 trsvcid: 4420 00:09:20.100 subnqn: nqn.2016-06.io.spdk:cnode3 00:09:20.100 traddr: 192.168.100.8 00:09:20.100 eflags: none 00:09:20.100 rdma_prtype: not specified 00:09:20.100 rdma_qptype: connected 00:09:20.100 rdma_cms: rdma-cm 00:09:20.100 rdma_pkey: 0x0000 00:09:20.100 =====Discovery Log Entry 4====== 00:09:20.100 trtype: rdma 00:09:20.100 adrfam: ipv4 00:09:20.100 subtype: nvme subsystem 00:09:20.100 treq: not required 00:09:20.100 portid: 0 00:09:20.100 trsvcid: 4420 00:09:20.100 subnqn: nqn.2016-06.io.spdk:cnode4 00:09:20.100 traddr: 192.168.100.8 00:09:20.100 eflags: none 00:09:20.100 rdma_prtype: not specified 00:09:20.100 rdma_qptype: connected 00:09:20.100 rdma_cms: rdma-cm 00:09:20.100 rdma_pkey: 0x0000 00:09:20.100 =====Discovery Log Entry 5====== 00:09:20.100 trtype: rdma 00:09:20.100 adrfam: ipv4 00:09:20.100 subtype: discovery subsystem referral 00:09:20.100 treq: not required 00:09:20.100 portid: 0 00:09:20.100 trsvcid: 4430 00:09:20.100 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:09:20.100 traddr: 192.168.100.8 00:09:20.100 eflags: none 00:09:20.100 rdma_prtype: unrecognized 00:09:20.100 rdma_qptype: unrecognized 00:09:20.100 rdma_cms: unrecognized 00:09:20.100 rdma_pkey: 0x0000 00:09:20.100 15:59:50 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:09:20.100 Perform nvmf subsystem discovery via RPC 00:09:20.100 15:59:50 -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:09:20.100 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.100 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 [2024-11-20 15:59:50.765212] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:09:20.100 [ 00:09:20.100 { 00:09:20.100 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:09:20.100 "subtype": "Discovery", 
00:09:20.100 "listen_addresses": [ 00:09:20.100 { 00:09:20.100 "transport": "RDMA", 00:09:20.100 "trtype": "RDMA", 00:09:20.100 "adrfam": "IPv4", 00:09:20.100 "traddr": "192.168.100.8", 00:09:20.100 "trsvcid": "4420" 00:09:20.100 } 00:09:20.100 ], 00:09:20.100 "allow_any_host": true, 00:09:20.100 "hosts": [] 00:09:20.100 }, 00:09:20.100 { 00:09:20.100 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:09:20.100 "subtype": "NVMe", 00:09:20.100 "listen_addresses": [ 00:09:20.100 { 00:09:20.100 "transport": "RDMA", 00:09:20.100 "trtype": "RDMA", 00:09:20.100 "adrfam": "IPv4", 00:09:20.100 "traddr": "192.168.100.8", 00:09:20.100 "trsvcid": "4420" 00:09:20.100 } 00:09:20.100 ], 00:09:20.100 "allow_any_host": true, 00:09:20.100 "hosts": [], 00:09:20.100 "serial_number": "SPDK00000000000001", 00:09:20.100 "model_number": "SPDK bdev Controller", 00:09:20.100 "max_namespaces": 32, 00:09:20.100 "min_cntlid": 1, 00:09:20.100 "max_cntlid": 65519, 00:09:20.100 "namespaces": [ 00:09:20.100 { 00:09:20.100 "nsid": 1, 00:09:20.100 "bdev_name": "Null1", 00:09:20.100 "name": "Null1", 00:09:20.100 "nguid": "042EB05DB69C42E0998E30387246B0B6", 00:09:20.100 "uuid": "042eb05d-b69c-42e0-998e-30387246b0b6" 00:09:20.100 } 00:09:20.100 ] 00:09:20.100 }, 00:09:20.100 { 00:09:20.100 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:09:20.100 "subtype": "NVMe", 00:09:20.100 "listen_addresses": [ 00:09:20.100 { 00:09:20.100 "transport": "RDMA", 00:09:20.100 "trtype": "RDMA", 00:09:20.100 "adrfam": "IPv4", 00:09:20.100 "traddr": "192.168.100.8", 00:09:20.100 "trsvcid": "4420" 00:09:20.100 } 00:09:20.100 ], 00:09:20.100 "allow_any_host": true, 00:09:20.100 "hosts": [], 00:09:20.100 "serial_number": "SPDK00000000000002", 00:09:20.100 "model_number": "SPDK bdev Controller", 00:09:20.100 "max_namespaces": 32, 00:09:20.100 "min_cntlid": 1, 00:09:20.100 "max_cntlid": 65519, 00:09:20.100 "namespaces": [ 00:09:20.100 { 00:09:20.100 "nsid": 1, 00:09:20.100 "bdev_name": "Null2", 00:09:20.100 "name": "Null2", 00:09:20.100 "nguid": "F05F9F064ED44ED5BDB092F399FF1B7B", 00:09:20.100 "uuid": "f05f9f06-4ed4-4ed5-bdb0-92f399ff1b7b" 00:09:20.100 } 00:09:20.100 ] 00:09:20.100 }, 00:09:20.100 { 00:09:20.100 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:09:20.100 "subtype": "NVMe", 00:09:20.100 "listen_addresses": [ 00:09:20.100 { 00:09:20.100 "transport": "RDMA", 00:09:20.100 "trtype": "RDMA", 00:09:20.100 "adrfam": "IPv4", 00:09:20.100 "traddr": "192.168.100.8", 00:09:20.100 "trsvcid": "4420" 00:09:20.100 } 00:09:20.100 ], 00:09:20.100 "allow_any_host": true, 00:09:20.100 "hosts": [], 00:09:20.100 "serial_number": "SPDK00000000000003", 00:09:20.100 "model_number": "SPDK bdev Controller", 00:09:20.100 "max_namespaces": 32, 00:09:20.100 "min_cntlid": 1, 00:09:20.100 "max_cntlid": 65519, 00:09:20.100 "namespaces": [ 00:09:20.100 { 00:09:20.100 "nsid": 1, 00:09:20.100 "bdev_name": "Null3", 00:09:20.100 "name": "Null3", 00:09:20.100 "nguid": "E6307718345C4542BF458502F3B8A36C", 00:09:20.100 "uuid": "e6307718-345c-4542-bf45-8502f3b8a36c" 00:09:20.100 } 00:09:20.100 ] 00:09:20.100 }, 00:09:20.100 { 00:09:20.100 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:09:20.100 "subtype": "NVMe", 00:09:20.100 "listen_addresses": [ 00:09:20.100 { 00:09:20.100 "transport": "RDMA", 00:09:20.100 "trtype": "RDMA", 00:09:20.100 "adrfam": "IPv4", 00:09:20.100 "traddr": "192.168.100.8", 00:09:20.100 "trsvcid": "4420" 00:09:20.100 } 00:09:20.100 ], 00:09:20.100 "allow_any_host": true, 00:09:20.100 "hosts": [], 00:09:20.100 "serial_number": "SPDK00000000000004", 00:09:20.100 "model_number": "SPDK bdev 
Controller", 00:09:20.100 "max_namespaces": 32, 00:09:20.100 "min_cntlid": 1, 00:09:20.100 "max_cntlid": 65519, 00:09:20.100 "namespaces": [ 00:09:20.100 { 00:09:20.100 "nsid": 1, 00:09:20.100 "bdev_name": "Null4", 00:09:20.100 "name": "Null4", 00:09:20.100 "nguid": "1CB56AA3097E4ED88E52433A5AC07379", 00:09:20.100 "uuid": "1cb56aa3-097e-4ed8-8e52-433a5ac07379" 00:09:20.100 } 00:09:20.100 ] 00:09:20.100 } 00:09:20.100 ] 00:09:20.100 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.100 15:59:50 -- target/discovery.sh@42 -- # seq 1 4 00:09:20.100 15:59:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:20.100 15:59:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:20.100 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.100 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.100 15:59:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:09:20.100 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.100 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.100 15:59:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:20.100 15:59:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:09:20.100 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.100 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.100 15:59:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:09:20.100 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.100 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.100 15:59:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:20.100 15:59:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:09:20.100 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.100 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.100 15:59:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:09:20.100 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.100 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.100 15:59:50 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:09:20.100 15:59:50 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:09:20.100 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.100 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.100 15:59:50 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:09:20.100 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.100 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.100 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.100 15:59:50 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 192.168.100.8 -s 4430 00:09:20.101 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.101 
15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.101 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.101 15:59:50 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:09:20.101 15:59:50 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:09:20.101 15:59:50 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.101 15:59:50 -- common/autotest_common.sh@10 -- # set +x 00:09:20.101 15:59:50 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.360 15:59:50 -- target/discovery.sh@49 -- # check_bdevs= 00:09:20.360 15:59:50 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:09:20.360 15:59:50 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:09:20.360 15:59:50 -- target/discovery.sh@57 -- # nvmftestfini 00:09:20.360 15:59:50 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:20.360 15:59:50 -- nvmf/common.sh@116 -- # sync 00:09:20.360 15:59:50 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:09:20.360 15:59:50 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:09:20.360 15:59:50 -- nvmf/common.sh@119 -- # set +e 00:09:20.360 15:59:50 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:20.360 15:59:50 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:09:20.360 rmmod nvme_rdma 00:09:20.360 rmmod nvme_fabrics 00:09:20.360 15:59:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:20.360 15:59:50 -- nvmf/common.sh@123 -- # set -e 00:09:20.360 15:59:50 -- nvmf/common.sh@124 -- # return 0 00:09:20.360 15:59:50 -- nvmf/common.sh@477 -- # '[' -n 1221415 ']' 00:09:20.360 15:59:50 -- nvmf/common.sh@478 -- # killprocess 1221415 00:09:20.360 15:59:50 -- common/autotest_common.sh@936 -- # '[' -z 1221415 ']' 00:09:20.360 15:59:50 -- common/autotest_common.sh@940 -- # kill -0 1221415 00:09:20.360 15:59:50 -- common/autotest_common.sh@941 -- # uname 00:09:20.360 15:59:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:20.360 15:59:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1221415 00:09:20.360 15:59:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:20.360 15:59:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:20.360 15:59:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1221415' 00:09:20.360 killing process with pid 1221415 00:09:20.360 15:59:51 -- common/autotest_common.sh@955 -- # kill 1221415 00:09:20.360 [2024-11-20 15:59:51.048528] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:09:20.360 15:59:51 -- common/autotest_common.sh@960 -- # wait 1221415 00:09:20.620 15:59:51 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:20.620 15:59:51 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:09:20.620 00:09:20.620 real 0m8.630s 00:09:20.620 user 0m8.742s 00:09:20.620 sys 0m5.493s 00:09:20.620 15:59:51 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:20.620 15:59:51 -- common/autotest_common.sh@10 -- # set +x 00:09:20.620 ************************************ 00:09:20.620 END TEST nvmf_discovery 00:09:20.620 ************************************ 00:09:20.620 15:59:51 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:20.620 15:59:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:20.620 15:59:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:20.620 15:59:51 -- 
common/autotest_common.sh@10 -- # set +x 00:09:20.620 ************************************ 00:09:20.620 START TEST nvmf_referrals 00:09:20.620 ************************************ 00:09:20.620 15:59:51 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=rdma 00:09:20.880 * Looking for test storage... 00:09:20.880 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:20.880 15:59:51 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:20.880 15:59:51 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:20.880 15:59:51 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:20.880 15:59:51 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:20.880 15:59:51 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:20.880 15:59:51 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:20.880 15:59:51 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:20.880 15:59:51 -- scripts/common.sh@335 -- # IFS=.-: 00:09:20.880 15:59:51 -- scripts/common.sh@335 -- # read -ra ver1 00:09:20.880 15:59:51 -- scripts/common.sh@336 -- # IFS=.-: 00:09:20.880 15:59:51 -- scripts/common.sh@336 -- # read -ra ver2 00:09:20.880 15:59:51 -- scripts/common.sh@337 -- # local 'op=<' 00:09:20.880 15:59:51 -- scripts/common.sh@339 -- # ver1_l=2 00:09:20.880 15:59:51 -- scripts/common.sh@340 -- # ver2_l=1 00:09:20.880 15:59:51 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:20.880 15:59:51 -- scripts/common.sh@343 -- # case "$op" in 00:09:20.880 15:59:51 -- scripts/common.sh@344 -- # : 1 00:09:20.880 15:59:51 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:20.880 15:59:51 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:20.880 15:59:51 -- scripts/common.sh@364 -- # decimal 1 00:09:20.880 15:59:51 -- scripts/common.sh@352 -- # local d=1 00:09:20.880 15:59:51 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:20.880 15:59:51 -- scripts/common.sh@354 -- # echo 1 00:09:20.880 15:59:51 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:20.880 15:59:51 -- scripts/common.sh@365 -- # decimal 2 00:09:20.880 15:59:51 -- scripts/common.sh@352 -- # local d=2 00:09:20.880 15:59:51 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:20.880 15:59:51 -- scripts/common.sh@354 -- # echo 2 00:09:20.880 15:59:51 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:20.880 15:59:51 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:20.880 15:59:51 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:20.880 15:59:51 -- scripts/common.sh@367 -- # return 0 00:09:20.880 15:59:51 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:20.880 15:59:51 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:20.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.880 --rc genhtml_branch_coverage=1 00:09:20.880 --rc genhtml_function_coverage=1 00:09:20.880 --rc genhtml_legend=1 00:09:20.880 --rc geninfo_all_blocks=1 00:09:20.880 --rc geninfo_unexecuted_blocks=1 00:09:20.880 00:09:20.880 ' 00:09:20.880 15:59:51 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:20.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.880 --rc genhtml_branch_coverage=1 00:09:20.880 --rc genhtml_function_coverage=1 00:09:20.880 --rc genhtml_legend=1 00:09:20.880 --rc geninfo_all_blocks=1 00:09:20.880 --rc geninfo_unexecuted_blocks=1 00:09:20.880 00:09:20.880 ' 00:09:20.880 
15:59:51 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:20.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.880 --rc genhtml_branch_coverage=1 00:09:20.880 --rc genhtml_function_coverage=1 00:09:20.880 --rc genhtml_legend=1 00:09:20.880 --rc geninfo_all_blocks=1 00:09:20.880 --rc geninfo_unexecuted_blocks=1 00:09:20.880 00:09:20.880 ' 00:09:20.880 15:59:51 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:20.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:20.881 --rc genhtml_branch_coverage=1 00:09:20.881 --rc genhtml_function_coverage=1 00:09:20.881 --rc genhtml_legend=1 00:09:20.881 --rc geninfo_all_blocks=1 00:09:20.881 --rc geninfo_unexecuted_blocks=1 00:09:20.881 00:09:20.881 ' 00:09:20.881 15:59:51 -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:20.881 15:59:51 -- nvmf/common.sh@7 -- # uname -s 00:09:20.881 15:59:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:20.881 15:59:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:20.881 15:59:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:20.881 15:59:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:20.881 15:59:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:20.881 15:59:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:20.881 15:59:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:20.881 15:59:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:20.881 15:59:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:20.881 15:59:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:20.881 15:59:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:20.881 15:59:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:20.881 15:59:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:20.881 15:59:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:20.881 15:59:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:20.881 15:59:51 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:20.881 15:59:51 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:20.881 15:59:51 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:20.881 15:59:51 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:20.881 15:59:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.881 15:59:51 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.881 15:59:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.881 15:59:51 -- paths/export.sh@5 -- # export PATH 00:09:20.881 15:59:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:20.881 15:59:51 -- nvmf/common.sh@46 -- # : 0 00:09:20.881 15:59:51 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:20.881 15:59:51 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:20.881 15:59:51 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:20.881 15:59:51 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:20.881 15:59:51 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:20.881 15:59:51 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:20.881 15:59:51 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:20.881 15:59:51 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:20.881 15:59:51 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:09:20.881 15:59:51 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:09:20.881 15:59:51 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:09:20.881 15:59:51 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:09:20.881 15:59:51 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:09:20.881 15:59:51 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:09:20.881 15:59:51 -- target/referrals.sh@37 -- # nvmftestinit 00:09:20.881 15:59:51 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:09:20.881 15:59:51 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:20.881 15:59:51 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:20.881 15:59:51 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:20.881 15:59:51 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:20.881 15:59:51 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.881 15:59:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.881 15:59:51 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:09:20.881 15:59:51 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:20.881 15:59:51 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:20.881 15:59:51 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:20.881 15:59:51 -- common/autotest_common.sh@10 -- # set +x 00:09:27.457 15:59:57 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:27.457 15:59:57 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:27.457 15:59:57 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:27.457 15:59:57 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:27.457 15:59:57 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:27.457 15:59:57 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:27.457 15:59:57 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:27.457 15:59:57 -- nvmf/common.sh@294 -- # net_devs=() 00:09:27.457 15:59:57 -- nvmf/common.sh@294 -- # local -ga net_devs 00:09:27.457 15:59:57 -- nvmf/common.sh@295 -- # e810=() 00:09:27.457 15:59:57 -- nvmf/common.sh@295 -- # local -ga e810 00:09:27.457 15:59:57 -- nvmf/common.sh@296 -- # x722=() 00:09:27.457 15:59:57 -- nvmf/common.sh@296 -- # local -ga x722 00:09:27.457 15:59:57 -- nvmf/common.sh@297 -- # mlx=() 00:09:27.457 15:59:57 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:27.457 15:59:57 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:27.457 15:59:57 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:27.457 15:59:57 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:27.457 15:59:57 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:27.457 15:59:57 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:27.457 15:59:57 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:27.457 15:59:57 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:27.457 15:59:57 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:27.457 15:59:57 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:27.457 15:59:57 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:27.457 15:59:57 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:27.457 15:59:57 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:27.457 15:59:57 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:09:27.457 15:59:57 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:09:27.457 15:59:57 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:09:27.457 15:59:57 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:09:27.457 15:59:57 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:09:27.457 15:59:57 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:27.457 15:59:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:27.457 15:59:57 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:27.457 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:27.457 15:59:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:27.457 15:59:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:27.457 15:59:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:27.457 15:59:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:27.457 15:59:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:27.457 15:59:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:27.457 15:59:57 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:27.457 15:59:57 
-- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:27.457 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:27.457 15:59:57 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:27.457 15:59:57 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:27.457 15:59:57 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:27.457 15:59:57 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:27.457 15:59:57 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:27.457 15:59:57 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:27.457 15:59:57 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:27.457 15:59:57 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:09:27.457 15:59:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:27.457 15:59:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.457 15:59:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:27.457 15:59:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.457 15:59:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:27.457 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:27.457 15:59:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.457 15:59:57 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:27.457 15:59:57 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:27.457 15:59:57 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:27.457 15:59:57 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:27.457 15:59:57 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:27.457 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:27.457 15:59:57 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:27.457 15:59:57 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:27.457 15:59:57 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:27.457 15:59:57 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:27.457 15:59:57 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:09:27.457 15:59:57 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:09:27.457 15:59:57 -- nvmf/common.sh@408 -- # rdma_device_init 00:09:27.457 15:59:57 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:09:27.457 15:59:57 -- nvmf/common.sh@57 -- # uname 00:09:27.457 15:59:57 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:09:27.457 15:59:57 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:09:27.457 15:59:57 -- nvmf/common.sh@62 -- # modprobe ib_core 00:09:27.457 15:59:57 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:09:27.457 15:59:57 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:09:27.457 15:59:57 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:09:27.457 15:59:57 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:09:27.457 15:59:57 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:09:27.457 15:59:57 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:09:27.457 15:59:57 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:27.457 15:59:57 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:09:27.457 15:59:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:27.457 15:59:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:27.457 15:59:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:27.457 15:59:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:27.457 15:59:57 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:27.457 15:59:57 -- nvmf/common.sh@100 
-- # for net_dev in "${net_devs[@]}" 00:09:27.457 15:59:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.457 15:59:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:27.457 15:59:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:27.457 15:59:57 -- nvmf/common.sh@104 -- # continue 2 00:09:27.457 15:59:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:27.457 15:59:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.458 15:59:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:27.458 15:59:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.458 15:59:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:27.458 15:59:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:27.458 15:59:57 -- nvmf/common.sh@104 -- # continue 2 00:09:27.458 15:59:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:27.458 15:59:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:09:27.458 15:59:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:27.458 15:59:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:27.458 15:59:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:27.458 15:59:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:27.458 15:59:57 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:09:27.458 15:59:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:09:27.458 15:59:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:09:27.458 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:27.458 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:27.458 altname enp217s0f0np0 00:09:27.458 altname ens818f0np0 00:09:27.458 inet 192.168.100.8/24 scope global mlx_0_0 00:09:27.458 valid_lft forever preferred_lft forever 00:09:27.458 15:59:57 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:27.458 15:59:57 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:09:27.458 15:59:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:27.458 15:59:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:27.458 15:59:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:27.458 15:59:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:27.458 15:59:57 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:09:27.458 15:59:57 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:09:27.458 15:59:57 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:09:27.458 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:27.458 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:27.458 altname enp217s0f1np1 00:09:27.458 altname ens818f1np1 00:09:27.458 inet 192.168.100.9/24 scope global mlx_0_1 00:09:27.458 valid_lft forever preferred_lft forever 00:09:27.458 15:59:57 -- nvmf/common.sh@410 -- # return 0 00:09:27.458 15:59:57 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:27.458 15:59:57 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:27.458 15:59:57 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:09:27.458 15:59:57 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:09:27.458 15:59:57 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:09:27.458 15:59:57 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:27.458 15:59:57 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:27.458 15:59:57 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:27.458 15:59:57 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:27.458 15:59:57 
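The allocate_nic_ips/get_ip_address helpers above reduce to parsing ip -o -4 addr show for each RDMA-backed netdev. A rough stand-alone equivalent, assuming the mlx_0_0/mlx_0_1 interface names seen on this host:
for ifc in mlx_0_0 mlx_0_1; do                                          # assumption: names from this particular machine
  addr=$(ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1)    # column 4 is the CIDR; strip the prefix length
  echo "$ifc -> ${addr:-<no IPv4 address>}"
done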
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:27.458 15:59:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:27.458 15:59:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.458 15:59:57 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:27.458 15:59:57 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:27.458 15:59:57 -- nvmf/common.sh@104 -- # continue 2 00:09:27.458 15:59:57 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:27.458 15:59:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.458 15:59:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:27.458 15:59:57 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:27.458 15:59:57 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:27.458 15:59:57 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:27.458 15:59:57 -- nvmf/common.sh@104 -- # continue 2 00:09:27.458 15:59:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:27.458 15:59:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:09:27.458 15:59:57 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:27.458 15:59:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:27.458 15:59:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:27.458 15:59:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:27.458 15:59:57 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:27.458 15:59:57 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:09:27.458 15:59:57 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:27.458 15:59:57 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:27.458 15:59:57 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:27.458 15:59:57 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:27.458 15:59:57 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:09:27.458 192.168.100.9' 00:09:27.458 15:59:57 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:09:27.458 192.168.100.9' 00:09:27.458 15:59:57 -- nvmf/common.sh@445 -- # head -n 1 00:09:27.458 15:59:57 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:27.458 15:59:57 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:27.458 192.168.100.9' 00:09:27.458 15:59:57 -- nvmf/common.sh@446 -- # tail -n +2 00:09:27.458 15:59:57 -- nvmf/common.sh@446 -- # head -n 1 00:09:27.458 15:59:57 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:27.458 15:59:57 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:09:27.458 15:59:57 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:27.458 15:59:57 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:09:27.458 15:59:57 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:09:27.458 15:59:57 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:09:27.458 15:59:57 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:09:27.458 15:59:57 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:27.458 15:59:57 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:27.458 15:59:57 -- common/autotest_common.sh@10 -- # set +x 00:09:27.458 15:59:57 -- nvmf/common.sh@469 -- # nvmfpid=1225073 00:09:27.458 15:59:57 -- nvmf/common.sh@470 -- # waitforlisten 1225073 00:09:27.458 15:59:57 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:27.458 15:59:57 -- common/autotest_common.sh@829 -- # '[' -z 1225073 ']' 00:09:27.458 15:59:57 -- common/autotest_common.sh@833 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:09:27.458 15:59:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:27.458 15:59:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.458 15:59:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:27.458 15:59:57 -- common/autotest_common.sh@10 -- # set +x 00:09:27.458 [2024-11-20 15:59:57.919429] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:09:27.458 [2024-11-20 15:59:57.919488] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:27.458 EAL: No free 2048 kB hugepages reported on node 1 00:09:27.458 [2024-11-20 15:59:57.989567] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:27.458 [2024-11-20 15:59:58.029006] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:27.458 [2024-11-20 15:59:58.029121] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:27.458 [2024-11-20 15:59:58.029131] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:27.458 [2024-11-20 15:59:58.029139] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:27.458 [2024-11-20 15:59:58.029233] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:27.458 [2024-11-20 15:59:58.029328] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:27.458 [2024-11-20 15:59:58.029416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:27.458 [2024-11-20 15:59:58.029417] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.028 15:59:58 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:28.028 15:59:58 -- common/autotest_common.sh@862 -- # return 0 00:09:28.028 15:59:58 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:28.028 15:59:58 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:28.028 15:59:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.028 15:59:58 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:28.028 15:59:58 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:09:28.028 15:59:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.028 15:59:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.028 [2024-11-20 15:59:58.802744] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21a70d0/0x21ab5a0) succeed. 00:09:28.028 [2024-11-20 15:59:58.811857] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21a8670/0x21ecc40) succeed. 
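nvmfappstart above launches the target and waits for its RPC socket before the transport is created. A minimal stand-alone sketch of that startup sequence follows; the binary and script paths are the ones this job uses and are assumptions elsewhere, and the readiness poll via rpc_get_methods is a simplification of the waitforlisten helper.
SPDK_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk    # assumption: workspace path from this job
RPC="$SPDK_DIR/scripts/rpc.py"
$SPDK_DIR/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &      # shm id 0, all tracepoint groups, cores 0-3
nvmfpid=$!
until $RPC rpc_get_methods >/dev/null 2>&1; do sleep 1; done          # crude wait for /var/tmp/spdk.sock to answer
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192  # same transport options the test passes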
00:09:28.287 15:59:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.287 15:59:58 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery 00:09:28.287 15:59:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.287 15:59:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.287 [2024-11-20 15:59:58.934072] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 8009 *** 00:09:28.287 15:59:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.287 15:59:58 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 00:09:28.287 15:59:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.287 15:59:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.287 15:59:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.287 15:59:58 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.3 -s 4430 00:09:28.287 15:59:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.287 15:59:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.287 15:59:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.287 15:59:58 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.4 -s 4430 00:09:28.287 15:59:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.287 15:59:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.287 15:59:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.287 15:59:58 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:28.287 15:59:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.287 15:59:58 -- common/autotest_common.sh@10 -- # set +x 00:09:28.287 15:59:58 -- target/referrals.sh@48 -- # jq length 00:09:28.287 15:59:58 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.287 15:59:59 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:09:28.287 15:59:59 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:09:28.287 15:59:59 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:28.287 15:59:59 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:28.287 15:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.287 15:59:59 -- common/autotest_common.sh@10 -- # set +x 00:09:28.287 15:59:59 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:28.287 15:59:59 -- target/referrals.sh@21 -- # sort 00:09:28.287 15:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.287 15:59:59 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:09:28.287 15:59:59 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:28.287 15:59:59 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:09:28.287 15:59:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:28.287 15:59:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:28.287 15:59:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:28.287 15:59:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:28.287 15:59:59 -- target/referrals.sh@26 -- # sort 00:09:28.545 15:59:59 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 
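The referral checks above compare the target-side RPC view against what a host sees in the discovery log. A condensed sketch of that round trip; the discovery port 8009 and the 127.0.0.x/4430 referrals mirror referrals.sh, and the rpc.py path assumes the current directory is an SPDK checkout.
RPC=./scripts/rpc.py                                       # assumption: run from the SPDK repository root
$RPC nvmf_subsystem_add_listener -t rdma -a 192.168.100.8 -s 8009 discovery
for a in 127.0.0.2 127.0.0.3 127.0.0.4; do
  $RPC nvmf_discovery_add_referral -t rdma -a $a -s 4430
done
$RPC nvmf_discovery_get_referrals | jq length              # target-side: expect 3
$RPC nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json |
  jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort   # host-side view
for a in 127.0.0.2 127.0.0.3 127.0.0.4; do                 # cleanup, as the test does before the next case
  $RPC nvmf_discovery_remove_referral -t rdma -a $a -s 4430
done
Both listings should come back as 127.0.0.2 127.0.0.3 127.0.0.4 before the removals, which is exactly the string comparison the test performs.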
00:09:28.545 15:59:59 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:09:28.545 15:59:59 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 00:09:28.545 15:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.545 15:59:59 -- common/autotest_common.sh@10 -- # set +x 00:09:28.545 15:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.545 15:59:59 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.3 -s 4430 00:09:28.545 15:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.545 15:59:59 -- common/autotest_common.sh@10 -- # set +x 00:09:28.545 15:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.545 15:59:59 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.4 -s 4430 00:09:28.545 15:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.545 15:59:59 -- common/autotest_common.sh@10 -- # set +x 00:09:28.545 15:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.545 15:59:59 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:28.545 15:59:59 -- target/referrals.sh@56 -- # jq length 00:09:28.545 15:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.545 15:59:59 -- common/autotest_common.sh@10 -- # set +x 00:09:28.545 15:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.545 15:59:59 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:09:28.545 15:59:59 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:09:28.546 15:59:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:28.546 15:59:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:28.546 15:59:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:28.546 15:59:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:28.546 15:59:59 -- target/referrals.sh@26 -- # sort 00:09:28.804 15:59:59 -- target/referrals.sh@26 -- # echo 00:09:28.804 15:59:59 -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:09:28.804 15:59:59 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n discovery 00:09:28.804 15:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.804 15:59:59 -- common/autotest_common.sh@10 -- # set +x 00:09:28.804 15:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.804 15:59:59 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:28.804 15:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.804 15:59:59 -- common/autotest_common.sh@10 -- # set +x 00:09:28.804 15:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.804 15:59:59 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:09:28.804 15:59:59 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:28.804 15:59:59 -- target/referrals.sh@21 -- # sort 00:09:28.804 15:59:59 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:28.804 15:59:59 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:28.804 15:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.804 15:59:59 -- 
common/autotest_common.sh@10 -- # set +x 00:09:28.804 15:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.804 15:59:59 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:09:28.804 15:59:59 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:28.804 15:59:59 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:09:28.804 15:59:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:28.804 15:59:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:28.804 15:59:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:28.804 15:59:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:28.804 15:59:59 -- target/referrals.sh@26 -- # sort 00:09:28.804 15:59:59 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:09:28.804 15:59:59 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:09:28.804 15:59:59 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:09:28.804 15:59:59 -- target/referrals.sh@67 -- # jq -r .subnqn 00:09:28.804 15:59:59 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:28.804 15:59:59 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:28.804 15:59:59 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:29.063 15:59:59 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:09:29.063 15:59:59 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:09:29.063 15:59:59 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:29.063 15:59:59 -- target/referrals.sh@68 -- # jq -r .subnqn 00:09:29.063 15:59:59 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:29.063 15:59:59 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:29.063 15:59:59 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:29.063 15:59:59 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:09:29.063 15:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.063 15:59:59 -- common/autotest_common.sh@10 -- # set +x 00:09:29.063 15:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.063 15:59:59 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:09:29.063 15:59:59 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:09:29.063 15:59:59 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:29.063 15:59:59 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:09:29.063 15:59:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.063 15:59:59 -- common/autotest_common.sh@10 -- # set +x 00:09:29.063 15:59:59 -- target/referrals.sh@21 
-- # sort 00:09:29.063 15:59:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.063 15:59:59 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:09:29.063 15:59:59 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:29.063 15:59:59 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:09:29.063 15:59:59 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:29.063 15:59:59 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:29.063 15:59:59 -- target/referrals.sh@26 -- # sort 00:09:29.063 15:59:59 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:29.063 15:59:59 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:29.322 15:59:59 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:09:29.322 15:59:59 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:09:29.322 15:59:59 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:09:29.322 15:59:59 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:09:29.322 15:59:59 -- target/referrals.sh@75 -- # jq -r .subnqn 00:09:29.322 15:59:59 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:29.322 15:59:59 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:09:29.322 16:00:00 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:09:29.322 16:00:00 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:09:29.322 16:00:00 -- target/referrals.sh@76 -- # jq -r .subnqn 00:09:29.322 16:00:00 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:09:29.322 16:00:00 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:29.322 16:00:00 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:09:29.652 16:00:00 -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:09:29.652 16:00:00 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:09:29.652 16:00:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.652 16:00:00 -- common/autotest_common.sh@10 -- # set +x 00:09:29.652 16:00:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.652 16:00:00 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:09:29.652 16:00:00 -- target/referrals.sh@82 -- # jq length 00:09:29.652 16:00:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.652 16:00:00 -- common/autotest_common.sh@10 -- # set +x 00:09:29.652 16:00:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.652 16:00:00 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:09:29.652 16:00:00 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:09:29.652 16:00:00 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:09:29.652 16:00:00 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:09:29.652 16:00:00 
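get_discovery_entries above is just a jq filter over the nvme discover JSON output. A small sketch of the two filters applied once a referral carries a subsystem NQN; the addresses and the cnode1 NQN mirror this run, and scripts/rpc.py is assumed to be run from an SPDK checkout.
./scripts/rpc.py nvmf_discovery_add_referral -t rdma -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
out=$(nvme discover -t rdma -a 192.168.100.8 -s 8009 -o json)
echo "$out" | jq -r '.records[] | select(.subtype == "nvme subsystem") | .subnqn'               # should print the referred NQN
echo "$out" | jq -r '.records[] | select(.subtype == "discovery subsystem referral") | .subnqn' # plain referrals keep the well-known discovery NQN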
-- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 8009 -o json 00:09:29.652 16:00:00 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:09:29.652 16:00:00 -- target/referrals.sh@26 -- # sort 00:09:29.652 16:00:00 -- target/referrals.sh@26 -- # echo 00:09:29.652 16:00:00 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:09:29.652 16:00:00 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:09:29.652 16:00:00 -- target/referrals.sh@86 -- # nvmftestfini 00:09:29.652 16:00:00 -- nvmf/common.sh@476 -- # nvmfcleanup 00:09:29.652 16:00:00 -- nvmf/common.sh@116 -- # sync 00:09:29.652 16:00:00 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:09:29.652 16:00:00 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:09:29.652 16:00:00 -- nvmf/common.sh@119 -- # set +e 00:09:29.652 16:00:00 -- nvmf/common.sh@120 -- # for i in {1..20} 00:09:29.652 16:00:00 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:09:29.652 rmmod nvme_rdma 00:09:29.652 rmmod nvme_fabrics 00:09:29.652 16:00:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:09:29.652 16:00:00 -- nvmf/common.sh@123 -- # set -e 00:09:29.652 16:00:00 -- nvmf/common.sh@124 -- # return 0 00:09:29.652 16:00:00 -- nvmf/common.sh@477 -- # '[' -n 1225073 ']' 00:09:29.652 16:00:00 -- nvmf/common.sh@478 -- # killprocess 1225073 00:09:29.652 16:00:00 -- common/autotest_common.sh@936 -- # '[' -z 1225073 ']' 00:09:29.652 16:00:00 -- common/autotest_common.sh@940 -- # kill -0 1225073 00:09:29.652 16:00:00 -- common/autotest_common.sh@941 -- # uname 00:09:29.652 16:00:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:09:29.652 16:00:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1225073 00:09:29.940 16:00:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:09:29.940 16:00:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:09:29.940 16:00:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1225073' 00:09:29.940 killing process with pid 1225073 00:09:29.940 16:00:00 -- common/autotest_common.sh@955 -- # kill 1225073 00:09:29.940 16:00:00 -- common/autotest_common.sh@960 -- # wait 1225073 00:09:29.940 16:00:00 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:09:29.940 16:00:00 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:09:29.940 00:09:29.940 real 0m9.391s 00:09:29.940 user 0m13.218s 00:09:29.940 sys 0m5.823s 00:09:29.940 16:00:00 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:09:29.940 16:00:00 -- common/autotest_common.sh@10 -- # set +x 00:09:29.940 ************************************ 00:09:29.940 END TEST nvmf_referrals 00:09:29.940 ************************************ 00:09:30.200 16:00:00 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:30.200 16:00:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:09:30.200 16:00:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:09:30.200 16:00:00 -- common/autotest_common.sh@10 -- # set +x 00:09:30.200 ************************************ 00:09:30.200 START TEST nvmf_connect_disconnect 00:09:30.200 ************************************ 00:09:30.200 16:00:00 -- common/autotest_common.sh@1114 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=rdma 00:09:30.200 * Looking for test storage... 00:09:30.200 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:09:30.200 16:00:00 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:09:30.200 16:00:00 -- common/autotest_common.sh@1690 -- # lcov --version 00:09:30.200 16:00:00 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:09:30.200 16:00:00 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:09:30.200 16:00:00 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:09:30.200 16:00:00 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:09:30.200 16:00:00 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:09:30.200 16:00:00 -- scripts/common.sh@335 -- # IFS=.-: 00:09:30.200 16:00:00 -- scripts/common.sh@335 -- # read -ra ver1 00:09:30.200 16:00:00 -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.200 16:00:00 -- scripts/common.sh@336 -- # read -ra ver2 00:09:30.200 16:00:00 -- scripts/common.sh@337 -- # local 'op=<' 00:09:30.200 16:00:00 -- scripts/common.sh@339 -- # ver1_l=2 00:09:30.200 16:00:00 -- scripts/common.sh@340 -- # ver2_l=1 00:09:30.200 16:00:00 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:09:30.200 16:00:00 -- scripts/common.sh@343 -- # case "$op" in 00:09:30.200 16:00:00 -- scripts/common.sh@344 -- # : 1 00:09:30.200 16:00:00 -- scripts/common.sh@363 -- # (( v = 0 )) 00:09:30.200 16:00:00 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:30.200 16:00:00 -- scripts/common.sh@364 -- # decimal 1 00:09:30.200 16:00:00 -- scripts/common.sh@352 -- # local d=1 00:09:30.200 16:00:00 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.200 16:00:00 -- scripts/common.sh@354 -- # echo 1 00:09:30.200 16:00:00 -- scripts/common.sh@364 -- # ver1[v]=1 00:09:30.200 16:00:00 -- scripts/common.sh@365 -- # decimal 2 00:09:30.200 16:00:00 -- scripts/common.sh@352 -- # local d=2 00:09:30.200 16:00:00 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.200 16:00:00 -- scripts/common.sh@354 -- # echo 2 00:09:30.200 16:00:00 -- scripts/common.sh@365 -- # ver2[v]=2 00:09:30.200 16:00:00 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:09:30.200 16:00:00 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:09:30.200 16:00:00 -- scripts/common.sh@367 -- # return 0 00:09:30.200 16:00:00 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:30.200 16:00:00 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:09:30.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.200 --rc genhtml_branch_coverage=1 00:09:30.200 --rc genhtml_function_coverage=1 00:09:30.200 --rc genhtml_legend=1 00:09:30.200 --rc geninfo_all_blocks=1 00:09:30.200 --rc geninfo_unexecuted_blocks=1 00:09:30.200 00:09:30.200 ' 00:09:30.200 16:00:00 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:09:30.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.200 --rc genhtml_branch_coverage=1 00:09:30.200 --rc genhtml_function_coverage=1 00:09:30.200 --rc genhtml_legend=1 00:09:30.200 --rc geninfo_all_blocks=1 00:09:30.200 --rc geninfo_unexecuted_blocks=1 00:09:30.200 00:09:30.200 ' 00:09:30.200 16:00:00 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:09:30.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.200 --rc genhtml_branch_coverage=1 00:09:30.200 --rc genhtml_function_coverage=1 
00:09:30.200 --rc genhtml_legend=1 00:09:30.200 --rc geninfo_all_blocks=1 00:09:30.200 --rc geninfo_unexecuted_blocks=1 00:09:30.200 00:09:30.200 ' 00:09:30.200 16:00:00 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:09:30.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.200 --rc genhtml_branch_coverage=1 00:09:30.200 --rc genhtml_function_coverage=1 00:09:30.200 --rc genhtml_legend=1 00:09:30.200 --rc geninfo_all_blocks=1 00:09:30.200 --rc geninfo_unexecuted_blocks=1 00:09:30.200 00:09:30.200 ' 00:09:30.200 16:00:00 -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.200 16:00:00 -- nvmf/common.sh@7 -- # uname -s 00:09:30.200 16:00:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.200 16:00:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.200 16:00:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.200 16:00:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.200 16:00:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.200 16:00:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.200 16:00:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.200 16:00:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.200 16:00:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.200 16:00:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.200 16:00:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:09:30.200 16:00:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:09:30.200 16:00:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.200 16:00:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.200 16:00:00 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.200 16:00:00 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:09:30.200 16:00:00 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.200 16:00:00 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.200 16:00:00 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.200 16:00:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.200 16:00:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.200 16:00:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.200 16:00:00 -- paths/export.sh@5 -- # export PATH 00:09:30.200 16:00:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.200 16:00:00 -- nvmf/common.sh@46 -- # : 0 00:09:30.200 16:00:00 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:09:30.200 16:00:00 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:09:30.200 16:00:00 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:09:30.200 16:00:00 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.200 16:00:00 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.200 16:00:00 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:09:30.200 16:00:00 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:09:30.200 16:00:00 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:09:30.200 16:00:00 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:30.200 16:00:00 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:30.200 16:00:00 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:09:30.200 16:00:00 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:09:30.200 16:00:00 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.200 16:00:00 -- nvmf/common.sh@436 -- # prepare_net_devs 00:09:30.200 16:00:00 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:09:30.200 16:00:00 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:09:30.200 16:00:00 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.200 16:00:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:30.200 16:00:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.200 16:00:00 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:09:30.200 16:00:00 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:09:30.200 16:00:00 -- nvmf/common.sh@284 -- # xtrace_disable 00:09:30.200 16:00:00 -- common/autotest_common.sh@10 -- # set +x 00:09:36.772 16:00:07 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:09:36.772 16:00:07 -- nvmf/common.sh@290 -- # pci_devs=() 00:09:36.772 16:00:07 -- nvmf/common.sh@290 -- # local -a pci_devs 00:09:36.772 16:00:07 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:09:36.772 16:00:07 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:09:36.772 16:00:07 -- nvmf/common.sh@292 -- # pci_drivers=() 00:09:36.772 16:00:07 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:09:36.772 16:00:07 -- nvmf/common.sh@294 -- # net_devs=() 00:09:36.772 16:00:07 -- nvmf/common.sh@294 -- # local -ga net_devs 
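The preamble traced above fixes the host identity that every nvme discover/connect in this log reuses (host NQN from nvme gen-hostnqn, host ID matching its uuid part) before the PCI/NIC scan that continues below. A minimal sketch of that setup; the derivation of the host ID from the NQN is an assumption here, since the trace only shows the resulting values:

    # generate a host NQN and reuse its uuid as the host ID (assumed derivation;
    # the trace only shows the final NVME_HOSTNQN/NVME_HOSTID values)
    NVME_HOSTNQN=$(nvme gen-hostnqn)          # e.g. nqn.2014-08.org.nvmexpress:uuid:8013ee90-...
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    # the identity is then passed to every discovery, exactly as in the trace
    nvme discover "${NVME_HOST[@]}" -t rdma -a 192.168.100.8 -s 8009 -o json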
00:09:36.772 16:00:07 -- nvmf/common.sh@295 -- # e810=() 00:09:36.772 16:00:07 -- nvmf/common.sh@295 -- # local -ga e810 00:09:36.772 16:00:07 -- nvmf/common.sh@296 -- # x722=() 00:09:36.772 16:00:07 -- nvmf/common.sh@296 -- # local -ga x722 00:09:36.772 16:00:07 -- nvmf/common.sh@297 -- # mlx=() 00:09:36.772 16:00:07 -- nvmf/common.sh@297 -- # local -ga mlx 00:09:36.772 16:00:07 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:36.772 16:00:07 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:36.772 16:00:07 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:36.772 16:00:07 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:36.772 16:00:07 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:36.772 16:00:07 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:36.772 16:00:07 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:36.772 16:00:07 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:36.772 16:00:07 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:36.772 16:00:07 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:36.772 16:00:07 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:36.772 16:00:07 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:09:36.772 16:00:07 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:09:36.772 16:00:07 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:09:36.772 16:00:07 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:09:36.772 16:00:07 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:09:36.772 16:00:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:36.772 16:00:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:09:36.772 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:09:36.772 16:00:07 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:36.772 16:00:07 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:09:36.772 16:00:07 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:09:36.772 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:09:36.772 16:00:07 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:09:36.772 16:00:07 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:09:36.772 16:00:07 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:36.772 16:00:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.772 16:00:07 
-- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:36.772 16:00:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.772 16:00:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:09:36.772 Found net devices under 0000:d9:00.0: mlx_0_0 00:09:36.772 16:00:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.772 16:00:07 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:09:36.772 16:00:07 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:36.772 16:00:07 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:09:36.772 16:00:07 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:36.772 16:00:07 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:09:36.772 Found net devices under 0000:d9:00.1: mlx_0_1 00:09:36.772 16:00:07 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:09:36.772 16:00:07 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:09:36.772 16:00:07 -- nvmf/common.sh@402 -- # is_hw=yes 00:09:36.772 16:00:07 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@408 -- # rdma_device_init 00:09:36.772 16:00:07 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:09:36.772 16:00:07 -- nvmf/common.sh@57 -- # uname 00:09:36.772 16:00:07 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:09:36.772 16:00:07 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:09:36.772 16:00:07 -- nvmf/common.sh@62 -- # modprobe ib_core 00:09:36.772 16:00:07 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:09:36.772 16:00:07 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:09:36.772 16:00:07 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:09:36.772 16:00:07 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:09:36.772 16:00:07 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:09:36.772 16:00:07 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:09:36.772 16:00:07 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:09:36.772 16:00:07 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:09:36.772 16:00:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:36.772 16:00:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:36.772 16:00:07 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:36.772 16:00:07 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:36.772 16:00:07 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:36.772 16:00:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:36.772 16:00:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:36.772 16:00:07 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:36.772 16:00:07 -- nvmf/common.sh@104 -- # continue 2 00:09:36.772 16:00:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:36.772 16:00:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:36.772 16:00:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:36.772 16:00:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:09:36.772 16:00:07 -- nvmf/common.sh@104 -- # continue 2 00:09:36.772 16:00:07 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:36.772 16:00:07 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:09:36.772 16:00:07 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:36.772 16:00:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:36.772 16:00:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:36.772 16:00:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:36.772 16:00:07 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:09:36.772 16:00:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:09:36.772 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:36.772 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:09:36.772 altname enp217s0f0np0 00:09:36.772 altname ens818f0np0 00:09:36.772 inet 192.168.100.8/24 scope global mlx_0_0 00:09:36.772 valid_lft forever preferred_lft forever 00:09:36.772 16:00:07 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:09:36.772 16:00:07 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:09:36.772 16:00:07 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:36.772 16:00:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:36.772 16:00:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:36.772 16:00:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:36.772 16:00:07 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:09:36.772 16:00:07 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:09:36.772 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:09:36.772 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:09:36.772 altname enp217s0f1np1 00:09:36.772 altname ens818f1np1 00:09:36.772 inet 192.168.100.9/24 scope global mlx_0_1 00:09:36.772 valid_lft forever preferred_lft forever 00:09:36.772 16:00:07 -- nvmf/common.sh@410 -- # return 0 00:09:36.772 16:00:07 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:09:36.772 16:00:07 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:09:36.772 16:00:07 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:09:36.772 16:00:07 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:09:36.772 16:00:07 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:09:36.772 16:00:07 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:09:36.773 16:00:07 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:09:36.773 16:00:07 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:09:36.773 16:00:07 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:09:36.773 16:00:07 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:09:36.773 16:00:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:36.773 16:00:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:36.773 16:00:07 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:09:36.773 16:00:07 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:09:36.773 16:00:07 -- nvmf/common.sh@104 -- # continue 2 00:09:36.773 16:00:07 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:09:36.773 16:00:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:36.773 16:00:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:09:36.773 16:00:07 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:09:36.773 16:00:07 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:09:36.773 16:00:07 -- nvmf/common.sh@103 -- # echo mlx_0_1 
00:09:36.773 16:00:07 -- nvmf/common.sh@104 -- # continue 2 00:09:36.773 16:00:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:36.773 16:00:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:09:36.773 16:00:07 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:09:36.773 16:00:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:09:36.773 16:00:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:36.773 16:00:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:36.773 16:00:07 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:09:36.773 16:00:07 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:09:36.773 16:00:07 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:09:36.773 16:00:07 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:09:36.773 16:00:07 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:09:36.773 16:00:07 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:09:36.773 16:00:07 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:09:36.773 192.168.100.9' 00:09:36.773 16:00:07 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:09:36.773 192.168.100.9' 00:09:36.773 16:00:07 -- nvmf/common.sh@445 -- # head -n 1 00:09:36.773 16:00:07 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:09:36.773 16:00:07 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:09:36.773 192.168.100.9' 00:09:36.773 16:00:07 -- nvmf/common.sh@446 -- # tail -n +2 00:09:36.773 16:00:07 -- nvmf/common.sh@446 -- # head -n 1 00:09:36.773 16:00:07 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:09:36.773 16:00:07 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:09:36.773 16:00:07 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:09:36.773 16:00:07 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:09:36.773 16:00:07 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:09:36.773 16:00:07 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:09:36.773 16:00:07 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:09:36.773 16:00:07 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:09:36.773 16:00:07 -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:36.773 16:00:07 -- common/autotest_common.sh@10 -- # set +x 00:09:36.773 16:00:07 -- nvmf/common.sh@469 -- # nvmfpid=1229043 00:09:36.773 16:00:07 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:36.773 16:00:07 -- nvmf/common.sh@470 -- # waitforlisten 1229043 00:09:36.773 16:00:07 -- common/autotest_common.sh@829 -- # '[' -z 1229043 ']' 00:09:36.773 16:00:07 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:36.773 16:00:07 -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:36.773 16:00:07 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:36.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:36.773 16:00:07 -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:36.773 16:00:07 -- common/autotest_common.sh@10 -- # set +x 00:09:36.773 [2024-11-20 16:00:07.505576] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:09:36.773 [2024-11-20 16:00:07.505623] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:36.773 EAL: No free 2048 kB hugepages reported on node 1 00:09:36.773 [2024-11-20 16:00:07.574530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:37.031 [2024-11-20 16:00:07.612084] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:09:37.031 [2024-11-20 16:00:07.612216] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:37.031 [2024-11-20 16:00:07.612226] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:37.031 [2024-11-20 16:00:07.612235] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:37.031 [2024-11-20 16:00:07.612277] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:09:37.031 [2024-11-20 16:00:07.612377] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:09:37.031 [2024-11-20 16:00:07.612466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:09:37.031 [2024-11-20 16:00:07.612467] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.600 16:00:08 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:37.600 16:00:08 -- common/autotest_common.sh@862 -- # return 0 00:09:37.600 16:00:08 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:09:37.600 16:00:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:37.600 16:00:08 -- common/autotest_common.sh@10 -- # set +x 00:09:37.600 16:00:08 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:37.600 16:00:08 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0 00:09:37.600 16:00:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.600 16:00:08 -- common/autotest_common.sh@10 -- # set +x 00:09:37.600 [2024-11-20 16:00:08.387031] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:09:37.860 [2024-11-20 16:00:08.407937] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdbf0f0/0xdc35c0) succeed. 00:09:37.860 [2024-11-20 16:00:08.416995] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdc0690/0xe04c60) succeed. 
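For orientation, the rpc_cmd calls traced just above and below provision the target end to end: an RDMA transport, a 64 MB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that bdev as a namespace, and a listener on 192.168.100.8:4420, after which the test runs 100 host-side connect/disconnect iterations. A standalone sketch of the same sequence, assuming a running nvmf_tgt and scripts/rpc.py from the SPDK tree (parameters copied from the trace; this is an illustration, not the test script itself):

    # target-side provisioning over JSON-RPC, mirroring the rpc_cmd trace
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 64 512        # 64 MB bdev, 512 B blocks -> Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # host side: roughly what each of the 100 iterations reported below amounts to
    # (the loop body runs with xtrace off, so this pairing is an assumption)
    nvme connect -i 8 -t rdma -a 192.168.100.8 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e \
        --hostid=8013ee90-59d8-e711-906e-00163566263e
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1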
00:09:37.860 16:00:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.860 16:00:08 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:09:37.860 16:00:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.860 16:00:08 -- common/autotest_common.sh@10 -- # set +x 00:09:37.860 16:00:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.860 16:00:08 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:09:37.860 16:00:08 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:37.860 16:00:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.860 16:00:08 -- common/autotest_common.sh@10 -- # set +x 00:09:37.860 16:00:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.860 16:00:08 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:37.860 16:00:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.860 16:00:08 -- common/autotest_common.sh@10 -- # set +x 00:09:37.860 16:00:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.860 16:00:08 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:09:37.860 16:00:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:37.860 16:00:08 -- common/autotest_common.sh@10 -- # set +x 00:09:37.860 [2024-11-20 16:00:08.560658] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:09:37.860 16:00:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:37.860 16:00:08 -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:09:37.860 16:00:08 -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:09:37.860 16:00:08 -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:09:37.860 16:00:08 -- target/connect_disconnect.sh@34 -- # set +x 00:09:41.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.446 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.738 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.868 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:00.158 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.450 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:06.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.033 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.854 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:19.163 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.548 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:25.079 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:28.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:41.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:44.040 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.326 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:50.612 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.004 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:06.286 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:09.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:16.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:18.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:22.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:25.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.725 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:32.016 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.844 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.134 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.440 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.734 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:50.271 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:53.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.856 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:00.170 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.463 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.999 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.291 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:15.870 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.162 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.699 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.991 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.574 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.864 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.404 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.693 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.981 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.270 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:53.228 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:56.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.806 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.389 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.925 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.214 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.501 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:13:22.076 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.364 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.898 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.468 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:37.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:40.374 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.662 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:46.950 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:53.531 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:56.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.351 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.641 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:05.933 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.222 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:12.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.057 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.355 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:24.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.240 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.883 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.177 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:37.473 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:40.767 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.599 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:49.891 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.186 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.186 16:05:23 -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:53.186 16:05:23 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:53.186 16:05:23 -- nvmf/common.sh@476 -- # nvmfcleanup 00:14:53.186 16:05:23 -- nvmf/common.sh@116 -- # sync 00:14:53.186 16:05:23 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:14:53.186 16:05:23 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:14:53.186 16:05:23 -- nvmf/common.sh@119 -- # set +e 00:14:53.186 16:05:23 -- nvmf/common.sh@120 -- # for i in {1..20} 00:14:53.186 16:05:23 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:14:53.186 rmmod nvme_rdma 00:14:53.186 rmmod nvme_fabrics 00:14:53.186 16:05:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:14:53.186 16:05:23 -- nvmf/common.sh@123 -- # set -e 00:14:53.186 16:05:23 -- nvmf/common.sh@124 -- # return 0 00:14:53.186 16:05:23 -- nvmf/common.sh@477 -- # '[' -n 1229043 ']' 00:14:53.186 16:05:23 -- nvmf/common.sh@478 -- # killprocess 1229043 00:14:53.186 16:05:23 -- common/autotest_common.sh@936 -- # '[' -z 1229043 ']' 00:14:53.186 16:05:23 -- common/autotest_common.sh@940 -- # kill -0 1229043 00:14:53.186 16:05:23 -- common/autotest_common.sh@941 -- # uname 00:14:53.186 16:05:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:14:53.186 16:05:23 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1229043 00:14:53.186 16:05:23 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:14:53.186 16:05:23 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:14:53.186 16:05:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1229043' 00:14:53.186 killing process with pid 1229043 00:14:53.186 16:05:23 -- common/autotest_common.sh@955 -- # kill 1229043 00:14:53.186 16:05:23 -- common/autotest_common.sh@960 -- # wait 1229043 00:14:53.445 16:05:24 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:14:53.445 16:05:24 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:14:53.445 00:14:53.445 real 5m23.241s 00:14:53.445 user 21m2.898s 00:14:53.445 sys 0m17.634s 00:14:53.445 16:05:24 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:14:53.445 16:05:24 -- common/autotest_common.sh@10 -- # set +x 00:14:53.445 ************************************ 00:14:53.445 END TEST nvmf_connect_disconnect 00:14:53.445 ************************************ 00:14:53.445 16:05:24 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:53.445 16:05:24 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:14:53.445 16:05:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:14:53.445 16:05:24 -- common/autotest_common.sh@10 -- # set +x 00:14:53.445 ************************************ 00:14:53.445 START TEST nvmf_multitarget 00:14:53.445 ************************************ 00:14:53.445 16:05:24 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=rdma 00:14:53.445 * Looking for test storage... 00:14:53.445 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:14:53.445 16:05:24 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:14:53.445 16:05:24 -- common/autotest_common.sh@1690 -- # lcov --version 00:14:53.445 16:05:24 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:14:53.445 16:05:24 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:14:53.445 16:05:24 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:14:53.445 16:05:24 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:14:53.445 16:05:24 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:14:53.445 16:05:24 -- scripts/common.sh@335 -- # IFS=.-: 00:14:53.445 16:05:24 -- scripts/common.sh@335 -- # read -ra ver1 00:14:53.445 16:05:24 -- scripts/common.sh@336 -- # IFS=.-: 00:14:53.445 16:05:24 -- scripts/common.sh@336 -- # read -ra ver2 00:14:53.445 16:05:24 -- scripts/common.sh@337 -- # local 'op=<' 00:14:53.445 16:05:24 -- scripts/common.sh@339 -- # ver1_l=2 00:14:53.445 16:05:24 -- scripts/common.sh@340 -- # ver2_l=1 00:14:53.445 16:05:24 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:14:53.445 16:05:24 -- scripts/common.sh@343 -- # case "$op" in 00:14:53.445 16:05:24 -- scripts/common.sh@344 -- # : 1 00:14:53.445 16:05:24 -- scripts/common.sh@363 -- # (( v = 0 )) 00:14:53.445 16:05:24 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:53.445 16:05:24 -- scripts/common.sh@364 -- # decimal 1 00:14:53.705 16:05:24 -- scripts/common.sh@352 -- # local d=1 00:14:53.705 16:05:24 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:53.705 16:05:24 -- scripts/common.sh@354 -- # echo 1 00:14:53.705 16:05:24 -- scripts/common.sh@364 -- # ver1[v]=1 00:14:53.705 16:05:24 -- scripts/common.sh@365 -- # decimal 2 00:14:53.705 16:05:24 -- scripts/common.sh@352 -- # local d=2 00:14:53.705 16:05:24 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:53.705 16:05:24 -- scripts/common.sh@354 -- # echo 2 00:14:53.705 16:05:24 -- scripts/common.sh@365 -- # ver2[v]=2 00:14:53.705 16:05:24 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:14:53.705 16:05:24 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:14:53.705 16:05:24 -- scripts/common.sh@367 -- # return 0 00:14:53.706 16:05:24 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:53.706 16:05:24 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:14:53.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.706 --rc genhtml_branch_coverage=1 00:14:53.706 --rc genhtml_function_coverage=1 00:14:53.706 --rc genhtml_legend=1 00:14:53.706 --rc geninfo_all_blocks=1 00:14:53.706 --rc geninfo_unexecuted_blocks=1 00:14:53.706 00:14:53.706 ' 00:14:53.706 16:05:24 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:14:53.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.706 --rc genhtml_branch_coverage=1 00:14:53.706 --rc genhtml_function_coverage=1 00:14:53.706 --rc genhtml_legend=1 00:14:53.706 --rc geninfo_all_blocks=1 00:14:53.706 --rc geninfo_unexecuted_blocks=1 00:14:53.706 00:14:53.706 ' 00:14:53.706 16:05:24 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:14:53.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.706 --rc genhtml_branch_coverage=1 00:14:53.706 --rc genhtml_function_coverage=1 00:14:53.706 --rc genhtml_legend=1 00:14:53.706 --rc geninfo_all_blocks=1 00:14:53.706 --rc geninfo_unexecuted_blocks=1 00:14:53.706 00:14:53.706 ' 00:14:53.706 16:05:24 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:14:53.706 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:53.706 --rc genhtml_branch_coverage=1 00:14:53.706 --rc genhtml_function_coverage=1 00:14:53.706 --rc genhtml_legend=1 00:14:53.706 --rc geninfo_all_blocks=1 00:14:53.706 --rc geninfo_unexecuted_blocks=1 00:14:53.706 00:14:53.706 ' 00:14:53.706 16:05:24 -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:14:53.706 16:05:24 -- nvmf/common.sh@7 -- # uname -s 00:14:53.706 16:05:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:53.706 16:05:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:53.706 16:05:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:53.706 16:05:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:53.706 16:05:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:53.706 16:05:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:53.706 16:05:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:53.706 16:05:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:53.706 16:05:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:53.706 16:05:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:53.706 16:05:24 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:14:53.706 16:05:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:14:53.706 16:05:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:53.706 16:05:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:53.706 16:05:24 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:53.706 16:05:24 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:14:53.706 16:05:24 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:53.706 16:05:24 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:53.706 16:05:24 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:53.706 16:05:24 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.706 16:05:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.706 16:05:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.706 16:05:24 -- paths/export.sh@5 -- # export PATH 00:14:53.706 16:05:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:53.706 16:05:24 -- nvmf/common.sh@46 -- # : 0 00:14:53.706 16:05:24 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:14:53.706 16:05:24 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:14:53.706 16:05:24 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:14:53.706 16:05:24 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:53.706 16:05:24 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:53.706 16:05:24 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:14:53.706 16:05:24 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:14:53.706 16:05:24 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:14:53.706 16:05:24 -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:53.706 16:05:24 -- target/multitarget.sh@15 -- # nvmftestinit 00:14:53.706 16:05:24 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:14:53.706 16:05:24 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:53.706 16:05:24 -- nvmf/common.sh@436 -- # prepare_net_devs 00:14:53.706 16:05:24 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:14:53.706 16:05:24 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:14:53.706 16:05:24 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:53.706 16:05:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:53.706 16:05:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.706 16:05:24 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:14:53.706 16:05:24 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:14:53.706 16:05:24 -- nvmf/common.sh@284 -- # xtrace_disable 00:14:53.706 16:05:24 -- common/autotest_common.sh@10 -- # set +x 00:15:00.284 16:05:30 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:00.284 16:05:30 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:00.284 16:05:30 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:00.284 16:05:30 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:00.284 16:05:30 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:00.284 16:05:30 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:00.284 16:05:30 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:00.284 16:05:30 -- nvmf/common.sh@294 -- # net_devs=() 00:15:00.284 16:05:30 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:00.284 16:05:30 -- nvmf/common.sh@295 -- # e810=() 00:15:00.284 16:05:30 -- nvmf/common.sh@295 -- # local -ga e810 00:15:00.284 16:05:30 -- nvmf/common.sh@296 -- # x722=() 00:15:00.284 16:05:30 -- nvmf/common.sh@296 -- # local -ga x722 00:15:00.284 16:05:30 -- nvmf/common.sh@297 -- # mlx=() 00:15:00.284 16:05:30 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:00.284 16:05:30 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:00.284 16:05:30 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:00.284 16:05:30 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:00.284 16:05:30 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:00.284 16:05:30 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:00.284 16:05:30 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:00.284 16:05:30 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:00.284 16:05:30 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:00.284 16:05:30 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:00.284 16:05:30 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:00.285 16:05:30 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:00.285 16:05:30 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:00.285 16:05:30 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 
00:15:00.285 16:05:30 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:00.285 16:05:30 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:00.285 16:05:30 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:00.285 16:05:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:00.285 16:05:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:00.285 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:00.285 16:05:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:00.285 16:05:30 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:00.285 16:05:30 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:00.285 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:00.285 16:05:30 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:00.285 16:05:30 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:00.285 16:05:30 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:00.285 16:05:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.285 16:05:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:00.285 16:05:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.285 16:05:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:00.285 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:00.285 16:05:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.285 16:05:30 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:00.285 16:05:30 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:00.285 16:05:30 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:00.285 16:05:30 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:00.285 16:05:30 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:00.285 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:00.285 16:05:30 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:00.285 16:05:30 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:00.285 16:05:30 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:00.285 16:05:30 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:00.285 16:05:30 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:00.285 16:05:30 -- nvmf/common.sh@57 -- # uname 00:15:00.285 16:05:30 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:00.285 16:05:30 -- nvmf/common.sh@61 -- # modprobe ib_cm 
00:15:00.285 16:05:30 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:00.285 16:05:30 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:00.285 16:05:30 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:00.285 16:05:30 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:00.285 16:05:30 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:00.285 16:05:30 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:00.285 16:05:30 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:00.285 16:05:30 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:00.285 16:05:30 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:00.285 16:05:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:00.285 16:05:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:00.285 16:05:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:00.285 16:05:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:00.285 16:05:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:00.285 16:05:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:00.285 16:05:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:00.285 16:05:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:00.285 16:05:30 -- nvmf/common.sh@104 -- # continue 2 00:15:00.285 16:05:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:00.285 16:05:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:00.285 16:05:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:00.285 16:05:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:00.285 16:05:30 -- nvmf/common.sh@104 -- # continue 2 00:15:00.285 16:05:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:00.285 16:05:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:00.285 16:05:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:00.285 16:05:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:00.285 16:05:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:00.285 16:05:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:00.285 16:05:30 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:00.285 16:05:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:00.285 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:00.285 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:00.285 altname enp217s0f0np0 00:15:00.285 altname ens818f0np0 00:15:00.285 inet 192.168.100.8/24 scope global mlx_0_0 00:15:00.285 valid_lft forever preferred_lft forever 00:15:00.285 16:05:30 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:00.285 16:05:30 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:00.285 16:05:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:00.285 16:05:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:00.285 16:05:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:00.285 16:05:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:00.285 16:05:30 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:00.285 16:05:30 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:00.285 7: mlx_0_1: mtu 1500 qdisc mq 
state DOWN group default qlen 1000 00:15:00.285 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:00.285 altname enp217s0f1np1 00:15:00.285 altname ens818f1np1 00:15:00.285 inet 192.168.100.9/24 scope global mlx_0_1 00:15:00.285 valid_lft forever preferred_lft forever 00:15:00.285 16:05:30 -- nvmf/common.sh@410 -- # return 0 00:15:00.285 16:05:30 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:00.285 16:05:30 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:00.285 16:05:30 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:00.285 16:05:30 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:00.285 16:05:30 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:00.285 16:05:30 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:00.285 16:05:30 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:00.285 16:05:30 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:00.285 16:05:30 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:00.285 16:05:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:00.285 16:05:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:00.285 16:05:30 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:00.285 16:05:30 -- nvmf/common.sh@104 -- # continue 2 00:15:00.285 16:05:30 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:00.285 16:05:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:00.285 16:05:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:00.285 16:05:30 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:00.285 16:05:30 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:00.285 16:05:30 -- nvmf/common.sh@104 -- # continue 2 00:15:00.285 16:05:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:00.285 16:05:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:00.286 16:05:30 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:00.286 16:05:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:00.286 16:05:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:00.286 16:05:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:00.286 16:05:30 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:00.286 16:05:30 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:00.286 16:05:30 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:00.286 16:05:30 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:00.286 16:05:30 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:00.286 16:05:30 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:00.286 16:05:30 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:00.286 192.168.100.9' 00:15:00.286 16:05:30 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:00.286 192.168.100.9' 00:15:00.286 16:05:30 -- nvmf/common.sh@445 -- # head -n 1 00:15:00.286 16:05:30 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:00.286 16:05:30 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:00.286 192.168.100.9' 00:15:00.286 16:05:30 -- nvmf/common.sh@446 -- # tail -n +2 00:15:00.286 16:05:30 -- nvmf/common.sh@446 -- # head -n 1 00:15:00.286 16:05:30 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:00.286 16:05:30 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:00.286 16:05:30 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:00.286 16:05:30 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:00.286 16:05:30 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:00.286 16:05:30 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:00.286 16:05:30 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:15:00.286 16:05:30 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:00.286 16:05:30 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:00.286 16:05:30 -- common/autotest_common.sh@10 -- # set +x 00:15:00.286 16:05:30 -- nvmf/common.sh@469 -- # nvmfpid=1289392 00:15:00.286 16:05:30 -- nvmf/common.sh@470 -- # waitforlisten 1289392 00:15:00.286 16:05:30 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:00.286 16:05:30 -- common/autotest_common.sh@829 -- # '[' -z 1289392 ']' 00:15:00.286 16:05:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:00.286 16:05:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:00.286 16:05:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:00.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:00.286 16:05:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:00.286 16:05:30 -- common/autotest_common.sh@10 -- # set +x 00:15:00.286 [2024-11-20 16:05:30.990553] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:00.286 [2024-11-20 16:05:30.990609] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:00.286 EAL: No free 2048 kB hugepages reported on node 1 00:15:00.286 [2024-11-20 16:05:31.062138] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:00.546 [2024-11-20 16:05:31.100786] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:00.546 [2024-11-20 16:05:31.100907] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:00.546 [2024-11-20 16:05:31.100917] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:00.546 [2024-11-20 16:05:31.100926] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
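The nvmfappstart/waitforlisten pair traced above boils down to launching the target binary and polling its RPC socket until it answers. A minimal standalone sketch of that pattern (the SPDK_DIR location and the use of rpc_get_methods as a liveness probe are assumptions for illustration, not taken from this run):

    # Start the NVMe-oF target on cores 0-3 with all tracepoint groups enabled.
    SPDK_DIR=${SPDK_DIR:-./spdk}                  # assumed checkout location
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    tgt_pid=$!

    # Poll the UNIX-domain RPC socket until the target starts answering.
    for _ in $(seq 1 100); do
        if "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break
        fi
        sleep 0.2
    done
    echo "nvmf_tgt (pid $tgt_pid) is listening on /var/tmp/spdk.sock"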
00:15:00.546 [2024-11-20 16:05:31.100976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:00.546 [2024-11-20 16:05:31.101090] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:00.546 [2024-11-20 16:05:31.101108] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:00.546 [2024-11-20 16:05:31.101110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:01.116 16:05:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:01.116 16:05:31 -- common/autotest_common.sh@862 -- # return 0 00:15:01.116 16:05:31 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:01.116 16:05:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:01.116 16:05:31 -- common/autotest_common.sh@10 -- # set +x 00:15:01.116 16:05:31 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:01.116 16:05:31 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:01.116 16:05:31 -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:01.116 16:05:31 -- target/multitarget.sh@21 -- # jq length 00:15:01.376 16:05:31 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:15:01.376 16:05:31 -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:15:01.376 "nvmf_tgt_1" 00:15:01.376 16:05:32 -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:15:01.376 "nvmf_tgt_2" 00:15:01.636 16:05:32 -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:01.636 16:05:32 -- target/multitarget.sh@28 -- # jq length 00:15:01.636 16:05:32 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:15:01.637 16:05:32 -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:15:01.637 true 00:15:01.637 16:05:32 -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:15:01.896 true 00:15:01.896 16:05:32 -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:15:01.896 16:05:32 -- target/multitarget.sh@35 -- # jq length 00:15:01.896 16:05:32 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:15:01.896 16:05:32 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:15:01.896 16:05:32 -- target/multitarget.sh@41 -- # nvmftestfini 00:15:01.897 16:05:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:01.897 16:05:32 -- nvmf/common.sh@116 -- # sync 00:15:01.897 16:05:32 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:01.897 16:05:32 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:01.897 16:05:32 -- nvmf/common.sh@119 -- # set +e 00:15:01.897 16:05:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:01.897 16:05:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:01.897 rmmod nvme_rdma 00:15:01.897 rmmod nvme_fabrics 00:15:01.897 16:05:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:01.897 16:05:32 -- nvmf/common.sh@123 -- # set -e 00:15:01.897 16:05:32 -- nvmf/common.sh@124 -- # 
return 0 00:15:01.897 16:05:32 -- nvmf/common.sh@477 -- # '[' -n 1289392 ']' 00:15:01.897 16:05:32 -- nvmf/common.sh@478 -- # killprocess 1289392 00:15:01.897 16:05:32 -- common/autotest_common.sh@936 -- # '[' -z 1289392 ']' 00:15:01.897 16:05:32 -- common/autotest_common.sh@940 -- # kill -0 1289392 00:15:01.897 16:05:32 -- common/autotest_common.sh@941 -- # uname 00:15:01.897 16:05:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:01.897 16:05:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1289392 00:15:02.157 16:05:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:02.157 16:05:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:02.157 16:05:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1289392' 00:15:02.157 killing process with pid 1289392 00:15:02.157 16:05:32 -- common/autotest_common.sh@955 -- # kill 1289392 00:15:02.157 16:05:32 -- common/autotest_common.sh@960 -- # wait 1289392 00:15:02.157 16:05:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:02.157 16:05:32 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:02.157 00:15:02.157 real 0m8.823s 00:15:02.157 user 0m9.918s 00:15:02.157 sys 0m5.575s 00:15:02.157 16:05:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:02.157 16:05:32 -- common/autotest_common.sh@10 -- # set +x 00:15:02.157 ************************************ 00:15:02.157 END TEST nvmf_multitarget 00:15:02.157 ************************************ 00:15:02.157 16:05:32 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:15:02.157 16:05:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:02.157 16:05:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:02.157 16:05:32 -- common/autotest_common.sh@10 -- # set +x 00:15:02.157 ************************************ 00:15:02.157 START TEST nvmf_rpc 00:15:02.157 ************************************ 00:15:02.157 16:05:32 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=rdma 00:15:02.417 * Looking for test storage... 
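Stripped of the tracing, the nvmf_multitarget test that just ended is a short round trip through test/nvmf/target/multitarget_rpc.py with jq doing the counting. A condensed sketch of that flow (traps and error handling omitted; the -s 32 argument simply mirrors the invocations above):

    rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

    # Exactly one (default) target exists when the app comes up.
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]

    # Add two extra targets, then confirm three are reported.
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]

    # Delete them again and confirm only the default target remains.
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]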
00:15:02.417 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:02.417 16:05:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:02.417 16:05:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:02.417 16:05:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:02.417 16:05:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:02.417 16:05:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:02.417 16:05:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:02.417 16:05:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:02.417 16:05:33 -- scripts/common.sh@335 -- # IFS=.-: 00:15:02.417 16:05:33 -- scripts/common.sh@335 -- # read -ra ver1 00:15:02.417 16:05:33 -- scripts/common.sh@336 -- # IFS=.-: 00:15:02.417 16:05:33 -- scripts/common.sh@336 -- # read -ra ver2 00:15:02.417 16:05:33 -- scripts/common.sh@337 -- # local 'op=<' 00:15:02.417 16:05:33 -- scripts/common.sh@339 -- # ver1_l=2 00:15:02.417 16:05:33 -- scripts/common.sh@340 -- # ver2_l=1 00:15:02.417 16:05:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:02.417 16:05:33 -- scripts/common.sh@343 -- # case "$op" in 00:15:02.417 16:05:33 -- scripts/common.sh@344 -- # : 1 00:15:02.417 16:05:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:02.417 16:05:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:02.417 16:05:33 -- scripts/common.sh@364 -- # decimal 1 00:15:02.417 16:05:33 -- scripts/common.sh@352 -- # local d=1 00:15:02.417 16:05:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:02.417 16:05:33 -- scripts/common.sh@354 -- # echo 1 00:15:02.417 16:05:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:02.417 16:05:33 -- scripts/common.sh@365 -- # decimal 2 00:15:02.417 16:05:33 -- scripts/common.sh@352 -- # local d=2 00:15:02.417 16:05:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:02.417 16:05:33 -- scripts/common.sh@354 -- # echo 2 00:15:02.417 16:05:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:02.417 16:05:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:02.417 16:05:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:02.417 16:05:33 -- scripts/common.sh@367 -- # return 0 00:15:02.417 16:05:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:02.417 16:05:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:02.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.417 --rc genhtml_branch_coverage=1 00:15:02.417 --rc genhtml_function_coverage=1 00:15:02.417 --rc genhtml_legend=1 00:15:02.417 --rc geninfo_all_blocks=1 00:15:02.417 --rc geninfo_unexecuted_blocks=1 00:15:02.417 00:15:02.417 ' 00:15:02.417 16:05:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:02.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.417 --rc genhtml_branch_coverage=1 00:15:02.417 --rc genhtml_function_coverage=1 00:15:02.417 --rc genhtml_legend=1 00:15:02.417 --rc geninfo_all_blocks=1 00:15:02.417 --rc geninfo_unexecuted_blocks=1 00:15:02.417 00:15:02.417 ' 00:15:02.417 16:05:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:02.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.417 --rc genhtml_branch_coverage=1 00:15:02.417 --rc genhtml_function_coverage=1 00:15:02.417 --rc genhtml_legend=1 00:15:02.417 --rc geninfo_all_blocks=1 00:15:02.417 --rc geninfo_unexecuted_blocks=1 00:15:02.417 00:15:02.417 ' 
00:15:02.417 16:05:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:02.417 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:02.417 --rc genhtml_branch_coverage=1 00:15:02.417 --rc genhtml_function_coverage=1 00:15:02.417 --rc genhtml_legend=1 00:15:02.417 --rc geninfo_all_blocks=1 00:15:02.417 --rc geninfo_unexecuted_blocks=1 00:15:02.417 00:15:02.417 ' 00:15:02.417 16:05:33 -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.417 16:05:33 -- nvmf/common.sh@7 -- # uname -s 00:15:02.417 16:05:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.417 16:05:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.417 16:05:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.417 16:05:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.417 16:05:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.417 16:05:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.417 16:05:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.417 16:05:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.417 16:05:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.417 16:05:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.417 16:05:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:02.417 16:05:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:02.417 16:05:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.417 16:05:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.417 16:05:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.417 16:05:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:02.417 16:05:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.417 16:05:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.417 16:05:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.417 16:05:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.417 16:05:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.417 16:05:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.417 16:05:33 -- paths/export.sh@5 -- # export PATH 00:15:02.417 16:05:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.417 16:05:33 -- nvmf/common.sh@46 -- # : 0 00:15:02.417 16:05:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:02.417 16:05:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:02.417 16:05:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:02.417 16:05:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.417 16:05:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.417 16:05:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:02.417 16:05:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:02.417 16:05:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:02.417 16:05:33 -- target/rpc.sh@11 -- # loops=5 00:15:02.417 16:05:33 -- target/rpc.sh@23 -- # nvmftestinit 00:15:02.417 16:05:33 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:02.417 16:05:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.417 16:05:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:02.417 16:05:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:02.417 16:05:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:02.417 16:05:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.417 16:05:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.417 16:05:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.417 16:05:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:02.417 16:05:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:02.417 16:05:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:02.417 16:05:33 -- common/autotest_common.sh@10 -- # set +x 00:15:08.992 16:05:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:08.992 16:05:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:08.992 16:05:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:08.992 16:05:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:08.992 16:05:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:08.992 16:05:39 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:08.992 16:05:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:08.992 16:05:39 -- nvmf/common.sh@294 -- # net_devs=() 00:15:08.992 16:05:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:08.992 16:05:39 -- nvmf/common.sh@295 -- # e810=() 00:15:08.992 16:05:39 -- nvmf/common.sh@295 -- # local -ga e810 00:15:08.992 
16:05:39 -- nvmf/common.sh@296 -- # x722=() 00:15:08.992 16:05:39 -- nvmf/common.sh@296 -- # local -ga x722 00:15:08.992 16:05:39 -- nvmf/common.sh@297 -- # mlx=() 00:15:08.992 16:05:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:08.992 16:05:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:08.992 16:05:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:08.992 16:05:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:08.992 16:05:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:08.992 16:05:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:08.992 16:05:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:08.992 16:05:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:08.992 16:05:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:08.992 16:05:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:08.992 16:05:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:08.992 16:05:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:08.992 16:05:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:08.992 16:05:39 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:08.992 16:05:39 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:08.992 16:05:39 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:08.992 16:05:39 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:08.992 16:05:39 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:08.992 16:05:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:08.992 16:05:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:08.992 16:05:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:08.992 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:08.992 16:05:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:08.992 16:05:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:08.992 16:05:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:08.992 16:05:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:08.992 16:05:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:08.992 16:05:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:08.992 16:05:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:08.992 16:05:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:08.992 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:08.992 16:05:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:08.992 16:05:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:08.992 16:05:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:08.992 16:05:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:08.992 16:05:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:08.992 16:05:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:08.992 16:05:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:08.992 16:05:39 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:08.992 16:05:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:08.992 16:05:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.992 16:05:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:08.992 16:05:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
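As in the first discovery pass, each Mellanox PCI function is mapped to its kernel net device by globbing sysfs and keeping only the basename. That lookup works standalone, outside the test harness:

    pci=0000:d9:00.0                                  # PCI address of one ConnectX port (from this run)
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # expands to e.g. .../net/mlx_0_0
    pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path, keep the interface name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"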
00:15:08.992 16:05:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:08.992 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:08.992 16:05:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.993 16:05:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:08.993 16:05:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:08.993 16:05:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:08.993 16:05:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:08.993 16:05:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:08.993 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:08.993 16:05:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:08.993 16:05:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:08.993 16:05:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:08.993 16:05:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:08.993 16:05:39 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:08.993 16:05:39 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:08.993 16:05:39 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:08.993 16:05:39 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:08.993 16:05:39 -- nvmf/common.sh@57 -- # uname 00:15:08.993 16:05:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:08.993 16:05:39 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:08.993 16:05:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:08.993 16:05:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:08.993 16:05:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:08.993 16:05:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:08.993 16:05:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:08.993 16:05:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:08.993 16:05:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:08.993 16:05:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:08.993 16:05:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:08.993 16:05:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:08.993 16:05:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:08.993 16:05:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:08.993 16:05:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:08.993 16:05:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:08.993 16:05:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:08.993 16:05:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.993 16:05:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:08.993 16:05:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:08.993 16:05:39 -- nvmf/common.sh@104 -- # continue 2 00:15:08.993 16:05:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:08.993 16:05:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.993 16:05:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:08.993 16:05:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.993 16:05:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:08.993 16:05:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:08.993 16:05:39 -- nvmf/common.sh@104 -- # continue 2 00:15:08.993 16:05:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:08.993 16:05:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 
00:15:08.993 16:05:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:08.993 16:05:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:08.993 16:05:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:08.993 16:05:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:08.993 16:05:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:08.993 16:05:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:08.993 16:05:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:08.993 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:08.993 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:08.993 altname enp217s0f0np0 00:15:08.993 altname ens818f0np0 00:15:08.993 inet 192.168.100.8/24 scope global mlx_0_0 00:15:08.993 valid_lft forever preferred_lft forever 00:15:08.993 16:05:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:08.993 16:05:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:08.993 16:05:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:08.993 16:05:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:08.993 16:05:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:08.993 16:05:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:08.993 16:05:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:08.993 16:05:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:08.993 16:05:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:08.993 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:08.993 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:08.993 altname enp217s0f1np1 00:15:08.993 altname ens818f1np1 00:15:08.993 inet 192.168.100.9/24 scope global mlx_0_1 00:15:08.993 valid_lft forever preferred_lft forever 00:15:08.993 16:05:39 -- nvmf/common.sh@410 -- # return 0 00:15:08.993 16:05:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:08.993 16:05:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:08.993 16:05:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:08.993 16:05:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:08.993 16:05:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:08.993 16:05:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:08.993 16:05:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:08.993 16:05:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:08.993 16:05:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:08.993 16:05:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:08.993 16:05:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:08.993 16:05:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.993 16:05:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:08.993 16:05:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:08.993 16:05:39 -- nvmf/common.sh@104 -- # continue 2 00:15:08.993 16:05:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:08.993 16:05:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.993 16:05:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:08.993 16:05:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:08.993 16:05:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:08.993 16:05:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:08.993 16:05:39 -- nvmf/common.sh@104 -- # continue 2 00:15:08.993 16:05:39 -- nvmf/common.sh@85 -- # for nic_name in 
$(get_rdma_if_list) 00:15:08.993 16:05:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:08.993 16:05:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:08.993 16:05:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:08.993 16:05:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:08.993 16:05:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:08.993 16:05:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:08.993 16:05:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:08.993 16:05:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:08.993 16:05:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:08.993 16:05:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:08.993 16:05:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:08.993 16:05:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:08.993 192.168.100.9' 00:15:08.993 16:05:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:08.993 192.168.100.9' 00:15:08.993 16:05:39 -- nvmf/common.sh@445 -- # head -n 1 00:15:08.993 16:05:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:08.993 16:05:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:08.993 192.168.100.9' 00:15:08.993 16:05:39 -- nvmf/common.sh@446 -- # tail -n +2 00:15:08.993 16:05:39 -- nvmf/common.sh@446 -- # head -n 1 00:15:08.993 16:05:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:08.993 16:05:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:08.993 16:05:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:08.993 16:05:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:08.993 16:05:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:08.993 16:05:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:08.993 16:05:39 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:15:08.993 16:05:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:08.993 16:05:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:08.993 16:05:39 -- common/autotest_common.sh@10 -- # set +x 00:15:08.993 16:05:39 -- nvmf/common.sh@469 -- # nvmfpid=1292997 00:15:08.993 16:05:39 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:08.993 16:05:39 -- nvmf/common.sh@470 -- # waitforlisten 1292997 00:15:08.993 16:05:39 -- common/autotest_common.sh@829 -- # '[' -z 1292997 ']' 00:15:08.993 16:05:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:08.993 16:05:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:08.993 16:05:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:08.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:08.993 16:05:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:08.993 16:05:39 -- common/autotest_common.sh@10 -- # set +x 00:15:09.253 [2024-11-20 16:05:39.818506] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
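The first and second target IPs used by every connect below are carved out of RDMA_IP_LIST with head/tail exactly as traced above. A minimal sketch of that derivation (the interface names are the two present on this machine):

    # Collect the IPv4 address of each RDMA-capable interface, one per line.
    RDMA_IP_LIST=$(for iface in mlx_0_0 mlx_0_1; do
        ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1
    done)

    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                 # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)   # 192.168.100.9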
00:15:09.253 [2024-11-20 16:05:39.818574] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:09.253 EAL: No free 2048 kB hugepages reported on node 1 00:15:09.253 [2024-11-20 16:05:39.890327] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:09.253 [2024-11-20 16:05:39.927770] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:09.253 [2024-11-20 16:05:39.927882] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:09.253 [2024-11-20 16:05:39.927891] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:09.253 [2024-11-20 16:05:39.927900] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:09.253 [2024-11-20 16:05:39.927953] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:09.253 [2024-11-20 16:05:39.928063] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:09.253 [2024-11-20 16:05:39.928149] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:09.253 [2024-11-20 16:05:39.928151] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.191 16:05:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.191 16:05:40 -- common/autotest_common.sh@862 -- # return 0 00:15:10.191 16:05:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:10.191 16:05:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:10.191 16:05:40 -- common/autotest_common.sh@10 -- # set +x 00:15:10.191 16:05:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.191 16:05:40 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:15:10.191 16:05:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.191 16:05:40 -- common/autotest_common.sh@10 -- # set +x 00:15:10.191 16:05:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.191 16:05:40 -- target/rpc.sh@26 -- # stats='{ 00:15:10.191 "tick_rate": 2500000000, 00:15:10.191 "poll_groups": [ 00:15:10.191 { 00:15:10.191 "name": "nvmf_tgt_poll_group_0", 00:15:10.191 "admin_qpairs": 0, 00:15:10.191 "io_qpairs": 0, 00:15:10.191 "current_admin_qpairs": 0, 00:15:10.191 "current_io_qpairs": 0, 00:15:10.191 "pending_bdev_io": 0, 00:15:10.191 "completed_nvme_io": 0, 00:15:10.191 "transports": [] 00:15:10.191 }, 00:15:10.191 { 00:15:10.191 "name": "nvmf_tgt_poll_group_1", 00:15:10.191 "admin_qpairs": 0, 00:15:10.191 "io_qpairs": 0, 00:15:10.191 "current_admin_qpairs": 0, 00:15:10.191 "current_io_qpairs": 0, 00:15:10.191 "pending_bdev_io": 0, 00:15:10.191 "completed_nvme_io": 0, 00:15:10.191 "transports": [] 00:15:10.191 }, 00:15:10.191 { 00:15:10.191 "name": "nvmf_tgt_poll_group_2", 00:15:10.191 "admin_qpairs": 0, 00:15:10.191 "io_qpairs": 0, 00:15:10.191 "current_admin_qpairs": 0, 00:15:10.191 "current_io_qpairs": 0, 00:15:10.191 "pending_bdev_io": 0, 00:15:10.191 "completed_nvme_io": 0, 00:15:10.191 "transports": [] 00:15:10.191 }, 00:15:10.191 { 00:15:10.191 "name": "nvmf_tgt_poll_group_3", 00:15:10.191 "admin_qpairs": 0, 00:15:10.191 "io_qpairs": 0, 00:15:10.191 "current_admin_qpairs": 0, 00:15:10.191 "current_io_qpairs": 0, 00:15:10.191 "pending_bdev_io": 0, 00:15:10.191 "completed_nvme_io": 0, 00:15:10.191 "transports": [] 
00:15:10.191 } 00:15:10.191 ] 00:15:10.191 }' 00:15:10.191 16:05:40 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:15:10.191 16:05:40 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:15:10.191 16:05:40 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:15:10.191 16:05:40 -- target/rpc.sh@15 -- # wc -l 00:15:10.191 16:05:40 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:15:10.191 16:05:40 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:15:10.191 16:05:40 -- target/rpc.sh@29 -- # [[ null == null ]] 00:15:10.191 16:05:40 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:15:10.191 16:05:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.191 16:05:40 -- common/autotest_common.sh@10 -- # set +x 00:15:10.191 [2024-11-20 16:05:40.830002] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x128c130/0x1290600) succeed. 00:15:10.191 [2024-11-20 16:05:40.839125] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x128d6d0/0x12d1ca0) succeed. 00:15:10.191 16:05:40 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.191 16:05:40 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:15:10.191 16:05:40 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.191 16:05:40 -- common/autotest_common.sh@10 -- # set +x 00:15:10.452 16:05:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.452 16:05:41 -- target/rpc.sh@33 -- # stats='{ 00:15:10.452 "tick_rate": 2500000000, 00:15:10.452 "poll_groups": [ 00:15:10.452 { 00:15:10.452 "name": "nvmf_tgt_poll_group_0", 00:15:10.452 "admin_qpairs": 0, 00:15:10.452 "io_qpairs": 0, 00:15:10.452 "current_admin_qpairs": 0, 00:15:10.452 "current_io_qpairs": 0, 00:15:10.452 "pending_bdev_io": 0, 00:15:10.452 "completed_nvme_io": 0, 00:15:10.452 "transports": [ 00:15:10.452 { 00:15:10.452 "trtype": "RDMA", 00:15:10.452 "pending_data_buffer": 0, 00:15:10.452 "devices": [ 00:15:10.452 { 00:15:10.452 "name": "mlx5_0", 00:15:10.452 "polls": 15750, 00:15:10.452 "idle_polls": 15750, 00:15:10.452 "completions": 0, 00:15:10.452 "requests": 0, 00:15:10.452 "request_latency": 0, 00:15:10.452 "pending_free_request": 0, 00:15:10.452 "pending_rdma_read": 0, 00:15:10.452 "pending_rdma_write": 0, 00:15:10.452 "pending_rdma_send": 0, 00:15:10.452 "total_send_wrs": 0, 00:15:10.452 "send_doorbell_updates": 0, 00:15:10.452 "total_recv_wrs": 4096, 00:15:10.452 "recv_doorbell_updates": 1 00:15:10.452 }, 00:15:10.452 { 00:15:10.452 "name": "mlx5_1", 00:15:10.452 "polls": 15750, 00:15:10.452 "idle_polls": 15750, 00:15:10.452 "completions": 0, 00:15:10.452 "requests": 0, 00:15:10.452 "request_latency": 0, 00:15:10.452 "pending_free_request": 0, 00:15:10.452 "pending_rdma_read": 0, 00:15:10.452 "pending_rdma_write": 0, 00:15:10.452 "pending_rdma_send": 0, 00:15:10.452 "total_send_wrs": 0, 00:15:10.452 "send_doorbell_updates": 0, 00:15:10.452 "total_recv_wrs": 4096, 00:15:10.452 "recv_doorbell_updates": 1 00:15:10.452 } 00:15:10.452 ] 00:15:10.452 } 00:15:10.452 ] 00:15:10.452 }, 00:15:10.452 { 00:15:10.452 "name": "nvmf_tgt_poll_group_1", 00:15:10.452 "admin_qpairs": 0, 00:15:10.452 "io_qpairs": 0, 00:15:10.452 "current_admin_qpairs": 0, 00:15:10.452 "current_io_qpairs": 0, 00:15:10.452 "pending_bdev_io": 0, 00:15:10.452 "completed_nvme_io": 0, 00:15:10.452 "transports": [ 00:15:10.452 { 00:15:10.452 "trtype": "RDMA", 00:15:10.452 "pending_data_buffer": 0, 00:15:10.452 "devices": [ 00:15:10.452 { 00:15:10.452 "name": "mlx5_0", 00:15:10.452 "polls": 10114, 
00:15:10.452 "idle_polls": 10114, 00:15:10.452 "completions": 0, 00:15:10.452 "requests": 0, 00:15:10.452 "request_latency": 0, 00:15:10.452 "pending_free_request": 0, 00:15:10.452 "pending_rdma_read": 0, 00:15:10.452 "pending_rdma_write": 0, 00:15:10.452 "pending_rdma_send": 0, 00:15:10.452 "total_send_wrs": 0, 00:15:10.452 "send_doorbell_updates": 0, 00:15:10.452 "total_recv_wrs": 4096, 00:15:10.452 "recv_doorbell_updates": 1 00:15:10.452 }, 00:15:10.452 { 00:15:10.452 "name": "mlx5_1", 00:15:10.452 "polls": 10114, 00:15:10.452 "idle_polls": 10114, 00:15:10.452 "completions": 0, 00:15:10.452 "requests": 0, 00:15:10.452 "request_latency": 0, 00:15:10.452 "pending_free_request": 0, 00:15:10.452 "pending_rdma_read": 0, 00:15:10.452 "pending_rdma_write": 0, 00:15:10.452 "pending_rdma_send": 0, 00:15:10.452 "total_send_wrs": 0, 00:15:10.452 "send_doorbell_updates": 0, 00:15:10.452 "total_recv_wrs": 4096, 00:15:10.452 "recv_doorbell_updates": 1 00:15:10.452 } 00:15:10.452 ] 00:15:10.452 } 00:15:10.452 ] 00:15:10.452 }, 00:15:10.452 { 00:15:10.452 "name": "nvmf_tgt_poll_group_2", 00:15:10.452 "admin_qpairs": 0, 00:15:10.452 "io_qpairs": 0, 00:15:10.452 "current_admin_qpairs": 0, 00:15:10.452 "current_io_qpairs": 0, 00:15:10.452 "pending_bdev_io": 0, 00:15:10.452 "completed_nvme_io": 0, 00:15:10.452 "transports": [ 00:15:10.452 { 00:15:10.452 "trtype": "RDMA", 00:15:10.452 "pending_data_buffer": 0, 00:15:10.452 "devices": [ 00:15:10.452 { 00:15:10.452 "name": "mlx5_0", 00:15:10.452 "polls": 5725, 00:15:10.452 "idle_polls": 5725, 00:15:10.452 "completions": 0, 00:15:10.452 "requests": 0, 00:15:10.452 "request_latency": 0, 00:15:10.452 "pending_free_request": 0, 00:15:10.452 "pending_rdma_read": 0, 00:15:10.452 "pending_rdma_write": 0, 00:15:10.452 "pending_rdma_send": 0, 00:15:10.452 "total_send_wrs": 0, 00:15:10.452 "send_doorbell_updates": 0, 00:15:10.452 "total_recv_wrs": 4096, 00:15:10.452 "recv_doorbell_updates": 1 00:15:10.452 }, 00:15:10.452 { 00:15:10.452 "name": "mlx5_1", 00:15:10.452 "polls": 5725, 00:15:10.452 "idle_polls": 5725, 00:15:10.452 "completions": 0, 00:15:10.452 "requests": 0, 00:15:10.452 "request_latency": 0, 00:15:10.452 "pending_free_request": 0, 00:15:10.452 "pending_rdma_read": 0, 00:15:10.452 "pending_rdma_write": 0, 00:15:10.452 "pending_rdma_send": 0, 00:15:10.452 "total_send_wrs": 0, 00:15:10.452 "send_doorbell_updates": 0, 00:15:10.452 "total_recv_wrs": 4096, 00:15:10.452 "recv_doorbell_updates": 1 00:15:10.452 } 00:15:10.452 ] 00:15:10.452 } 00:15:10.452 ] 00:15:10.452 }, 00:15:10.452 { 00:15:10.452 "name": "nvmf_tgt_poll_group_3", 00:15:10.452 "admin_qpairs": 0, 00:15:10.452 "io_qpairs": 0, 00:15:10.452 "current_admin_qpairs": 0, 00:15:10.452 "current_io_qpairs": 0, 00:15:10.452 "pending_bdev_io": 0, 00:15:10.452 "completed_nvme_io": 0, 00:15:10.452 "transports": [ 00:15:10.452 { 00:15:10.452 "trtype": "RDMA", 00:15:10.452 "pending_data_buffer": 0, 00:15:10.452 "devices": [ 00:15:10.452 { 00:15:10.452 "name": "mlx5_0", 00:15:10.452 "polls": 910, 00:15:10.452 "idle_polls": 910, 00:15:10.452 "completions": 0, 00:15:10.452 "requests": 0, 00:15:10.452 "request_latency": 0, 00:15:10.452 "pending_free_request": 0, 00:15:10.452 "pending_rdma_read": 0, 00:15:10.452 "pending_rdma_write": 0, 00:15:10.452 "pending_rdma_send": 0, 00:15:10.452 "total_send_wrs": 0, 00:15:10.452 "send_doorbell_updates": 0, 00:15:10.452 "total_recv_wrs": 4096, 00:15:10.452 "recv_doorbell_updates": 1 00:15:10.452 }, 00:15:10.452 { 00:15:10.452 "name": "mlx5_1", 00:15:10.452 "polls": 910, 
00:15:10.452 "idle_polls": 910, 00:15:10.452 "completions": 0, 00:15:10.452 "requests": 0, 00:15:10.452 "request_latency": 0, 00:15:10.452 "pending_free_request": 0, 00:15:10.452 "pending_rdma_read": 0, 00:15:10.452 "pending_rdma_write": 0, 00:15:10.452 "pending_rdma_send": 0, 00:15:10.452 "total_send_wrs": 0, 00:15:10.452 "send_doorbell_updates": 0, 00:15:10.452 "total_recv_wrs": 4096, 00:15:10.452 "recv_doorbell_updates": 1 00:15:10.452 } 00:15:10.452 ] 00:15:10.452 } 00:15:10.452 ] 00:15:10.452 } 00:15:10.452 ] 00:15:10.452 }' 00:15:10.452 16:05:41 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:15:10.452 16:05:41 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:10.453 16:05:41 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:10.453 16:05:41 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:10.453 16:05:41 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:15:10.453 16:05:41 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:15:10.453 16:05:41 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:10.453 16:05:41 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:10.453 16:05:41 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:10.453 16:05:41 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:15:10.453 16:05:41 -- target/rpc.sh@38 -- # '[' rdma == rdma ']' 00:15:10.453 16:05:41 -- target/rpc.sh@40 -- # jcount '.poll_groups[0].transports[].trtype' 00:15:10.453 16:05:41 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[].trtype' 00:15:10.453 16:05:41 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[].trtype' 00:15:10.453 16:05:41 -- target/rpc.sh@15 -- # wc -l 00:15:10.453 16:05:41 -- target/rpc.sh@40 -- # (( 1 == 1 )) 00:15:10.453 16:05:41 -- target/rpc.sh@41 -- # jq -r '.poll_groups[0].transports[0].trtype' 00:15:10.453 16:05:41 -- target/rpc.sh@41 -- # transport_type=RDMA 00:15:10.453 16:05:41 -- target/rpc.sh@42 -- # [[ rdma == \r\d\m\a ]] 00:15:10.453 16:05:41 -- target/rpc.sh@43 -- # jcount '.poll_groups[0].transports[0].devices[].name' 00:15:10.453 16:05:41 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[0].transports[0].devices[].name' 00:15:10.453 16:05:41 -- target/rpc.sh@15 -- # jq '.poll_groups[0].transports[0].devices[].name' 00:15:10.453 16:05:41 -- target/rpc.sh@15 -- # wc -l 00:15:10.453 16:05:41 -- target/rpc.sh@43 -- # (( 2 > 0 )) 00:15:10.453 16:05:41 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:15:10.453 16:05:41 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:15:10.453 16:05:41 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:15:10.453 16:05:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.453 16:05:41 -- common/autotest_common.sh@10 -- # set +x 00:15:10.453 Malloc1 00:15:10.453 16:05:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.453 16:05:41 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:10.453 16:05:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.453 16:05:41 -- common/autotest_common.sh@10 -- # set +x 00:15:10.712 16:05:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.712 16:05:41 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:10.712 16:05:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.712 16:05:41 -- common/autotest_common.sh@10 -- # set +x 00:15:10.713 16:05:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.713 
16:05:41 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:15:10.713 16:05:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.713 16:05:41 -- common/autotest_common.sh@10 -- # set +x 00:15:10.713 16:05:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.713 16:05:41 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:10.713 16:05:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.713 16:05:41 -- common/autotest_common.sh@10 -- # set +x 00:15:10.713 [2024-11-20 16:05:41.286253] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:10.713 16:05:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.713 16:05:41 -- target/rpc.sh@58 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:15:10.713 16:05:41 -- common/autotest_common.sh@650 -- # local es=0 00:15:10.713 16:05:41 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:15:10.713 16:05:41 -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:10.713 16:05:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.713 16:05:41 -- common/autotest_common.sh@642 -- # type -t nvme 00:15:10.713 16:05:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.713 16:05:41 -- common/autotest_common.sh@644 -- # type -P nvme 00:15:10.713 16:05:41 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:10.713 16:05:41 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:10.713 16:05:41 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:10.713 16:05:41 -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e -a 192.168.100.8 -s 4420 00:15:10.713 [2024-11-20 16:05:41.332072] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:15:10.713 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:10.713 could not add new controller: failed to write to nvme-fabrics device 00:15:10.713 16:05:41 -- common/autotest_common.sh@653 -- # es=1 00:15:10.713 16:05:41 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:10.713 16:05:41 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:10.713 16:05:41 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:10.713 16:05:41 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:10.713 16:05:41 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.713 16:05:41 -- common/autotest_common.sh@10 -- # set +x 00:15:10.713 
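Because allow-any-host was switched off (nvmf_subsystem_allow_any_host -d), the first nvme connect above was rejected with 'does not allow host' until nvmf_subsystem_add_host whitelisted the initiator's NQN. In isolation, reusing the $rpc shorthand from the previous sketch, that dance looks like this (host NQN and host ID are the values printed above):

    subnqn=nqn.2016-06.io.spdk:cnode1
    hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e

    $rpc nvmf_subsystem_allow_any_host -d "$subnqn"     # only whitelisted hosts may connect
    $rpc nvmf_subsystem_add_listener "$subnqn" -t rdma -a 192.168.100.8 -s 4420

    # Expected to fail: the host NQN is not on the subsystem's whitelist yet.
    nvme connect -i 15 -t rdma -n "$subnqn" -a 192.168.100.8 -s 4420 \
        --hostnqn="$hostnqn" --hostid=8013ee90-59d8-e711-906e-00163566263e || true

    # Whitelist the host, after which the same connect succeeds.
    $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn"
    nvme connect -i 15 -t rdma -n "$subnqn" -a 192.168.100.8 -s 4420 \
        --hostnqn="$hostnqn" --hostid=8013ee90-59d8-e711-906e-00163566263e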
16:05:41 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.713 16:05:41 -- target/rpc.sh@62 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:11.651 16:05:42 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:15:11.651 16:05:42 -- common/autotest_common.sh@1187 -- # local i=0 00:15:11.651 16:05:42 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:11.652 16:05:42 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:11.652 16:05:42 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:13.559 16:05:44 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:13.819 16:05:44 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:13.819 16:05:44 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:13.819 16:05:44 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:13.819 16:05:44 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:13.819 16:05:44 -- common/autotest_common.sh@1197 -- # return 0 00:15:13.819 16:05:44 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:14.757 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.757 16:05:45 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:14.757 16:05:45 -- common/autotest_common.sh@1208 -- # local i=0 00:15:14.757 16:05:45 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:14.757 16:05:45 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.757 16:05:45 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:14.757 16:05:45 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:14.757 16:05:45 -- common/autotest_common.sh@1220 -- # return 0 00:15:14.757 16:05:45 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:14.757 16:05:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.757 16:05:45 -- common/autotest_common.sh@10 -- # set +x 00:15:14.757 16:05:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.757 16:05:45 -- target/rpc.sh@69 -- # NOT nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:14.757 16:05:45 -- common/autotest_common.sh@650 -- # local es=0 00:15:14.757 16:05:45 -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:14.757 16:05:45 -- common/autotest_common.sh@638 -- # local arg=nvme 00:15:14.757 16:05:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:14.757 16:05:45 -- common/autotest_common.sh@642 -- # type -t nvme 00:15:14.757 16:05:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:14.757 16:05:45 -- common/autotest_common.sh@644 -- # type -P nvme 00:15:14.757 16:05:45 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:14.757 16:05:45 -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:15:14.757 
16:05:45 -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:15:14.757 16:05:45 -- common/autotest_common.sh@653 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:14.757 [2024-11-20 16:05:45.414278] ctrlr.c: 715:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e' 00:15:14.757 Failed to write to /dev/nvme-fabrics: Input/output error 00:15:14.757 could not add new controller: failed to write to nvme-fabrics device 00:15:14.757 16:05:45 -- common/autotest_common.sh@653 -- # es=1 00:15:14.757 16:05:45 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:14.757 16:05:45 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:14.757 16:05:45 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:14.757 16:05:45 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:15:14.757 16:05:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:14.757 16:05:45 -- common/autotest_common.sh@10 -- # set +x 00:15:14.757 16:05:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:14.757 16:05:45 -- target/rpc.sh@73 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:15.694 16:05:46 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:15:15.694 16:05:46 -- common/autotest_common.sh@1187 -- # local i=0 00:15:15.694 16:05:46 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:15.694 16:05:46 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:15.694 16:05:46 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:17.655 16:05:48 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:17.655 16:05:48 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:17.655 16:05:48 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:17.914 16:05:48 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:17.914 16:05:48 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:17.914 16:05:48 -- common/autotest_common.sh@1197 -- # return 0 00:15:17.914 16:05:48 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:18.852 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.852 16:05:49 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:18.852 16:05:49 -- common/autotest_common.sh@1208 -- # local i=0 00:15:18.852 16:05:49 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:18.852 16:05:49 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.852 16:05:49 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:18.852 16:05:49 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:18.852 16:05:49 -- common/autotest_common.sh@1220 -- # return 0 00:15:18.852 16:05:49 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:18.852 16:05:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.852 16:05:49 -- common/autotest_common.sh@10 -- # set +x 00:15:18.852 16:05:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
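What the failed and successful connects above demonstrate: with allow_any_host disabled, a host whose NQN is not registered against the subsystem is rejected at connect time with "does not allow host"; registering that NQN via nvmf_subsystem_add_host, or re-enabling allow_any_host, lets the same nvme connect succeed. The access-control knobs, as a sketch using the same NQNs as this run (default rpc.py socket assumed):
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
  scripts/rpc.py nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1    # lock the subsystem down
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 "$hostnqn"  # admit exactly this host
  scripts/rpc.py nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1    # or open it to every host again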
00:15:18.852 16:05:49 -- target/rpc.sh@81 -- # seq 1 5 00:15:18.852 16:05:49 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:18.852 16:05:49 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:18.852 16:05:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.852 16:05:49 -- common/autotest_common.sh@10 -- # set +x 00:15:18.852 16:05:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.852 16:05:49 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:18.852 16:05:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.852 16:05:49 -- common/autotest_common.sh@10 -- # set +x 00:15:18.852 [2024-11-20 16:05:49.466199] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:18.852 16:05:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.852 16:05:49 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:18.852 16:05:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.852 16:05:49 -- common/autotest_common.sh@10 -- # set +x 00:15:18.852 16:05:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.852 16:05:49 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:18.852 16:05:49 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.852 16:05:49 -- common/autotest_common.sh@10 -- # set +x 00:15:18.852 16:05:49 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.852 16:05:49 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:19.789 16:05:50 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:19.789 16:05:50 -- common/autotest_common.sh@1187 -- # local i=0 00:15:19.789 16:05:50 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:19.789 16:05:50 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:19.789 16:05:50 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:21.694 16:05:52 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:21.694 16:05:52 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:21.694 16:05:52 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:21.694 16:05:52 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:21.694 16:05:52 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:21.694 16:05:52 -- common/autotest_common.sh@1197 -- # return 0 00:15:21.694 16:05:52 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:22.631 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:22.631 16:05:53 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:22.631 16:05:53 -- common/autotest_common.sh@1208 -- # local i=0 00:15:22.631 16:05:53 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:22.631 16:05:53 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:22.892 16:05:53 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:22.892 16:05:53 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:22.892 16:05:53 -- common/autotest_common.sh@1220 -- # return 0 00:15:22.892 
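Each pass of this loop follows the same shape: connect over RDMA, poll lsblk until a block device with serial SPDKISFASTANDAWESOME appears, then disconnect before the namespace and subsystem are torn down. Reduced to plain shell as a sketch (waitforserial/waitforserial_disconnect in the trace are retry loops, collapsed here to a single poll; $hostnqn and $hostid stand for the host UUID NQN and UUID used throughout this run):
  nvme connect -i 15 --hostnqn="$hostnqn" --hostid="$hostid" -t rdma \
      -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420
  until lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME; do sleep 2; done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1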
16:05:53 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:22.892 16:05:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.892 16:05:53 -- common/autotest_common.sh@10 -- # set +x 00:15:22.892 16:05:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.892 16:05:53 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:22.892 16:05:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.892 16:05:53 -- common/autotest_common.sh@10 -- # set +x 00:15:22.892 16:05:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.892 16:05:53 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:22.892 16:05:53 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:22.892 16:05:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.892 16:05:53 -- common/autotest_common.sh@10 -- # set +x 00:15:22.892 16:05:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.892 16:05:53 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:22.892 16:05:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.892 16:05:53 -- common/autotest_common.sh@10 -- # set +x 00:15:22.892 [2024-11-20 16:05:53.500397] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:22.892 16:05:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.892 16:05:53 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:22.892 16:05:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.892 16:05:53 -- common/autotest_common.sh@10 -- # set +x 00:15:22.892 16:05:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.892 16:05:53 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:22.892 16:05:53 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.893 16:05:53 -- common/autotest_common.sh@10 -- # set +x 00:15:22.893 16:05:53 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.893 16:05:53 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:23.831 16:05:54 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:23.831 16:05:54 -- common/autotest_common.sh@1187 -- # local i=0 00:15:23.831 16:05:54 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:23.831 16:05:54 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:23.831 16:05:54 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:25.739 16:05:56 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:25.740 16:05:56 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:25.740 16:05:56 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:25.740 16:05:56 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:25.740 16:05:56 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:25.740 16:05:56 -- common/autotest_common.sh@1197 -- # return 0 00:15:25.740 16:05:56 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:26.677 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:26.678 16:05:57 -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:26.678 16:05:57 -- common/autotest_common.sh@1208 -- # local i=0 00:15:26.678 16:05:57 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:26.678 16:05:57 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:26.678 16:05:57 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:26.678 16:05:57 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:26.937 16:05:57 -- common/autotest_common.sh@1220 -- # return 0 00:15:26.937 16:05:57 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:26.937 16:05:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.937 16:05:57 -- common/autotest_common.sh@10 -- # set +x 00:15:26.937 16:05:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.937 16:05:57 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:26.937 16:05:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.937 16:05:57 -- common/autotest_common.sh@10 -- # set +x 00:15:26.937 16:05:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.937 16:05:57 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:26.937 16:05:57 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:26.937 16:05:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.937 16:05:57 -- common/autotest_common.sh@10 -- # set +x 00:15:26.937 16:05:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.937 16:05:57 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:26.937 16:05:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.937 16:05:57 -- common/autotest_common.sh@10 -- # set +x 00:15:26.937 [2024-11-20 16:05:57.521865] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:26.937 16:05:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.937 16:05:57 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:26.937 16:05:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.937 16:05:57 -- common/autotest_common.sh@10 -- # set +x 00:15:26.937 16:05:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.937 16:05:57 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:26.937 16:05:57 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.937 16:05:57 -- common/autotest_common.sh@10 -- # set +x 00:15:26.937 16:05:57 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.937 16:05:57 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:27.875 16:05:58 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:27.875 16:05:58 -- common/autotest_common.sh@1187 -- # local i=0 00:15:27.875 16:05:58 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:27.875 16:05:58 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:27.875 16:05:58 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:29.781 16:06:00 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:29.781 16:06:00 -- 
common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:29.781 16:06:00 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:29.781 16:06:00 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:29.781 16:06:00 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:29.781 16:06:00 -- common/autotest_common.sh@1197 -- # return 0 00:15:29.781 16:06:00 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:30.719 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.719 16:06:01 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:30.719 16:06:01 -- common/autotest_common.sh@1208 -- # local i=0 00:15:30.719 16:06:01 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:30.719 16:06:01 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:30.719 16:06:01 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:30.719 16:06:01 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:30.720 16:06:01 -- common/autotest_common.sh@1220 -- # return 0 00:15:30.720 16:06:01 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:30.720 16:06:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.720 16:06:01 -- common/autotest_common.sh@10 -- # set +x 00:15:30.978 16:06:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.978 16:06:01 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:30.978 16:06:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.978 16:06:01 -- common/autotest_common.sh@10 -- # set +x 00:15:30.978 16:06:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.978 16:06:01 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:30.978 16:06:01 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:30.978 16:06:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.978 16:06:01 -- common/autotest_common.sh@10 -- # set +x 00:15:30.978 16:06:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.978 16:06:01 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:30.979 16:06:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.979 16:06:01 -- common/autotest_common.sh@10 -- # set +x 00:15:30.979 [2024-11-20 16:06:01.553598] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:30.979 16:06:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.979 16:06:01 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:30.979 16:06:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.979 16:06:01 -- common/autotest_common.sh@10 -- # set +x 00:15:30.979 16:06:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.979 16:06:01 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:30.979 16:06:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.979 16:06:01 -- common/autotest_common.sh@10 -- # set +x 00:15:30.979 16:06:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.979 16:06:01 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
--hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:31.916 16:06:02 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:31.916 16:06:02 -- common/autotest_common.sh@1187 -- # local i=0 00:15:31.916 16:06:02 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:31.916 16:06:02 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:31.916 16:06:02 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:33.823 16:06:04 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:33.823 16:06:04 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:33.823 16:06:04 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:33.823 16:06:04 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:33.823 16:06:04 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:33.823 16:06:04 -- common/autotest_common.sh@1197 -- # return 0 00:15:33.823 16:06:04 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:34.762 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.762 16:06:05 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:34.762 16:06:05 -- common/autotest_common.sh@1208 -- # local i=0 00:15:34.762 16:06:05 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:34.762 16:06:05 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:34.762 16:06:05 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:34.762 16:06:05 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:34.762 16:06:05 -- common/autotest_common.sh@1220 -- # return 0 00:15:34.762 16:06:05 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:34.762 16:06:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.762 16:06:05 -- common/autotest_common.sh@10 -- # set +x 00:15:35.022 16:06:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.022 16:06:05 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:35.022 16:06:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.022 16:06:05 -- common/autotest_common.sh@10 -- # set +x 00:15:35.022 16:06:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.022 16:06:05 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:35.022 16:06:05 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:35.022 16:06:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.022 16:06:05 -- common/autotest_common.sh@10 -- # set +x 00:15:35.022 16:06:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.022 16:06:05 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:35.022 16:06:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.022 16:06:05 -- common/autotest_common.sh@10 -- # set +x 00:15:35.022 [2024-11-20 16:06:05.587515] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:35.022 16:06:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.022 16:06:05 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:35.022 16:06:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.022 16:06:05 -- 
common/autotest_common.sh@10 -- # set +x 00:15:35.022 16:06:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.022 16:06:05 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:35.022 16:06:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.022 16:06:05 -- common/autotest_common.sh@10 -- # set +x 00:15:35.022 16:06:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.022 16:06:05 -- target/rpc.sh@86 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:15:35.961 16:06:06 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:35.961 16:06:06 -- common/autotest_common.sh@1187 -- # local i=0 00:15:35.961 16:06:06 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:15:35.961 16:06:06 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:15:35.961 16:06:06 -- common/autotest_common.sh@1194 -- # sleep 2 00:15:37.866 16:06:08 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:15:37.866 16:06:08 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:15:37.866 16:06:08 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:15:37.866 16:06:08 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:15:37.866 16:06:08 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:15:37.866 16:06:08 -- common/autotest_common.sh@1197 -- # return 0 00:15:37.866 16:06:08 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:38.805 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:38.805 16:06:09 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:38.805 16:06:09 -- common/autotest_common.sh@1208 -- # local i=0 00:15:38.805 16:06:09 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:38.805 16:06:09 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:15:38.805 16:06:09 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:15:38.805 16:06:09 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:38.805 16:06:09 -- common/autotest_common.sh@1220 -- # return 0 00:15:38.805 16:06:09 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:38.805 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.805 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:38.805 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.805 16:06:09 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:38.805 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.805 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.065 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.065 16:06:09 -- target/rpc.sh@99 -- # seq 1 5 00:15:39.065 16:06:09 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:39.065 16:06:09 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:39.065 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.065 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.065 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 [2024-11-20 16:06:09.634871] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:39.066 16:06:09 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 [2024-11-20 16:06:09.687088] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 
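This second loop never connects a host at all; it is pure RPC churn, building up and tearing down the same subsystem five times to exercise the create/delete paths. One pass, sketched with the same names and listener address as the run (default rpc.py socket assumed):
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1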
16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:39.066 16:06:09 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 [2024-11-20 16:06:09.735251] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:39.066 16:06:09 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 [2024-11-20 16:06:09.783424] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@102 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:39.066 16:06:09 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 [2024-11-20 16:06:09.835702] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.066 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.066 16:06:09 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:39.066 16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.066 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.326 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.326 16:06:09 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:39.326 
16:06:09 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.326 16:06:09 -- common/autotest_common.sh@10 -- # set +x 00:15:39.326 16:06:09 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.326 16:06:09 -- target/rpc.sh@110 -- # stats='{ 00:15:39.326 "tick_rate": 2500000000, 00:15:39.326 "poll_groups": [ 00:15:39.326 { 00:15:39.326 "name": "nvmf_tgt_poll_group_0", 00:15:39.326 "admin_qpairs": 2, 00:15:39.326 "io_qpairs": 27, 00:15:39.326 "current_admin_qpairs": 0, 00:15:39.326 "current_io_qpairs": 0, 00:15:39.326 "pending_bdev_io": 0, 00:15:39.326 "completed_nvme_io": 177, 00:15:39.326 "transports": [ 00:15:39.326 { 00:15:39.326 "trtype": "RDMA", 00:15:39.326 "pending_data_buffer": 0, 00:15:39.326 "devices": [ 00:15:39.326 { 00:15:39.326 "name": "mlx5_0", 00:15:39.326 "polls": 3444115, 00:15:39.326 "idle_polls": 3443715, 00:15:39.326 "completions": 461, 00:15:39.326 "requests": 230, 00:15:39.326 "request_latency": 48165962, 00:15:39.326 "pending_free_request": 0, 00:15:39.326 "pending_rdma_read": 0, 00:15:39.326 "pending_rdma_write": 0, 00:15:39.326 "pending_rdma_send": 0, 00:15:39.326 "total_send_wrs": 405, 00:15:39.326 "send_doorbell_updates": 196, 00:15:39.326 "total_recv_wrs": 4326, 00:15:39.326 "recv_doorbell_updates": 196 00:15:39.326 }, 00:15:39.326 { 00:15:39.326 "name": "mlx5_1", 00:15:39.326 "polls": 3444115, 00:15:39.326 "idle_polls": 3444115, 00:15:39.326 "completions": 0, 00:15:39.326 "requests": 0, 00:15:39.326 "request_latency": 0, 00:15:39.326 "pending_free_request": 0, 00:15:39.326 "pending_rdma_read": 0, 00:15:39.326 "pending_rdma_write": 0, 00:15:39.326 "pending_rdma_send": 0, 00:15:39.326 "total_send_wrs": 0, 00:15:39.326 "send_doorbell_updates": 0, 00:15:39.326 "total_recv_wrs": 4096, 00:15:39.326 "recv_doorbell_updates": 1 00:15:39.326 } 00:15:39.326 ] 00:15:39.326 } 00:15:39.326 ] 00:15:39.326 }, 00:15:39.326 { 00:15:39.326 "name": "nvmf_tgt_poll_group_1", 00:15:39.326 "admin_qpairs": 2, 00:15:39.326 "io_qpairs": 26, 00:15:39.326 "current_admin_qpairs": 0, 00:15:39.326 "current_io_qpairs": 0, 00:15:39.326 "pending_bdev_io": 0, 00:15:39.326 "completed_nvme_io": 27, 00:15:39.326 "transports": [ 00:15:39.326 { 00:15:39.326 "trtype": "RDMA", 00:15:39.326 "pending_data_buffer": 0, 00:15:39.326 "devices": [ 00:15:39.326 { 00:15:39.326 "name": "mlx5_0", 00:15:39.326 "polls": 3396965, 00:15:39.326 "idle_polls": 3396805, 00:15:39.326 "completions": 160, 00:15:39.326 "requests": 80, 00:15:39.326 "request_latency": 7774054, 00:15:39.326 "pending_free_request": 0, 00:15:39.326 "pending_rdma_read": 0, 00:15:39.326 "pending_rdma_write": 0, 00:15:39.326 "pending_rdma_send": 0, 00:15:39.326 "total_send_wrs": 106, 00:15:39.326 "send_doorbell_updates": 80, 00:15:39.326 "total_recv_wrs": 4176, 00:15:39.326 "recv_doorbell_updates": 81 00:15:39.326 }, 00:15:39.326 { 00:15:39.326 "name": "mlx5_1", 00:15:39.326 "polls": 3396965, 00:15:39.326 "idle_polls": 3396965, 00:15:39.326 "completions": 0, 00:15:39.326 "requests": 0, 00:15:39.326 "request_latency": 0, 00:15:39.326 "pending_free_request": 0, 00:15:39.326 "pending_rdma_read": 0, 00:15:39.326 "pending_rdma_write": 0, 00:15:39.326 "pending_rdma_send": 0, 00:15:39.326 "total_send_wrs": 0, 00:15:39.326 "send_doorbell_updates": 0, 00:15:39.326 "total_recv_wrs": 4096, 00:15:39.326 "recv_doorbell_updates": 1 00:15:39.326 } 00:15:39.326 ] 00:15:39.326 } 00:15:39.326 ] 00:15:39.326 }, 00:15:39.326 { 00:15:39.326 "name": "nvmf_tgt_poll_group_2", 00:15:39.326 "admin_qpairs": 1, 00:15:39.326 "io_qpairs": 26, 00:15:39.326 
"current_admin_qpairs": 0, 00:15:39.326 "current_io_qpairs": 0, 00:15:39.326 "pending_bdev_io": 0, 00:15:39.326 "completed_nvme_io": 76, 00:15:39.326 "transports": [ 00:15:39.326 { 00:15:39.326 "trtype": "RDMA", 00:15:39.326 "pending_data_buffer": 0, 00:15:39.326 "devices": [ 00:15:39.326 { 00:15:39.326 "name": "mlx5_0", 00:15:39.326 "polls": 3466189, 00:15:39.326 "idle_polls": 3465999, 00:15:39.326 "completions": 207, 00:15:39.326 "requests": 103, 00:15:39.326 "request_latency": 19201166, 00:15:39.326 "pending_free_request": 0, 00:15:39.326 "pending_rdma_read": 0, 00:15:39.326 "pending_rdma_write": 0, 00:15:39.326 "pending_rdma_send": 0, 00:15:39.326 "total_send_wrs": 166, 00:15:39.326 "send_doorbell_updates": 93, 00:15:39.326 "total_recv_wrs": 4199, 00:15:39.326 "recv_doorbell_updates": 93 00:15:39.326 }, 00:15:39.326 { 00:15:39.326 "name": "mlx5_1", 00:15:39.326 "polls": 3466189, 00:15:39.326 "idle_polls": 3466189, 00:15:39.326 "completions": 0, 00:15:39.326 "requests": 0, 00:15:39.326 "request_latency": 0, 00:15:39.326 "pending_free_request": 0, 00:15:39.326 "pending_rdma_read": 0, 00:15:39.326 "pending_rdma_write": 0, 00:15:39.326 "pending_rdma_send": 0, 00:15:39.326 "total_send_wrs": 0, 00:15:39.326 "send_doorbell_updates": 0, 00:15:39.326 "total_recv_wrs": 4096, 00:15:39.326 "recv_doorbell_updates": 1 00:15:39.326 } 00:15:39.326 ] 00:15:39.326 } 00:15:39.327 ] 00:15:39.327 }, 00:15:39.327 { 00:15:39.327 "name": "nvmf_tgt_poll_group_3", 00:15:39.327 "admin_qpairs": 2, 00:15:39.327 "io_qpairs": 26, 00:15:39.327 "current_admin_qpairs": 0, 00:15:39.327 "current_io_qpairs": 0, 00:15:39.327 "pending_bdev_io": 0, 00:15:39.327 "completed_nvme_io": 175, 00:15:39.327 "transports": [ 00:15:39.327 { 00:15:39.327 "trtype": "RDMA", 00:15:39.327 "pending_data_buffer": 0, 00:15:39.327 "devices": [ 00:15:39.327 { 00:15:39.327 "name": "mlx5_0", 00:15:39.327 "polls": 2654247, 00:15:39.327 "idle_polls": 2653849, 00:15:39.327 "completions": 460, 00:15:39.327 "requests": 230, 00:15:39.327 "request_latency": 50042180, 00:15:39.327 "pending_free_request": 0, 00:15:39.327 "pending_rdma_read": 0, 00:15:39.327 "pending_rdma_write": 0, 00:15:39.327 "pending_rdma_send": 0, 00:15:39.327 "total_send_wrs": 405, 00:15:39.327 "send_doorbell_updates": 195, 00:15:39.327 "total_recv_wrs": 4326, 00:15:39.327 "recv_doorbell_updates": 196 00:15:39.327 }, 00:15:39.327 { 00:15:39.327 "name": "mlx5_1", 00:15:39.327 "polls": 2654247, 00:15:39.327 "idle_polls": 2654247, 00:15:39.327 "completions": 0, 00:15:39.327 "requests": 0, 00:15:39.327 "request_latency": 0, 00:15:39.327 "pending_free_request": 0, 00:15:39.327 "pending_rdma_read": 0, 00:15:39.327 "pending_rdma_write": 0, 00:15:39.327 "pending_rdma_send": 0, 00:15:39.327 "total_send_wrs": 0, 00:15:39.327 "send_doorbell_updates": 0, 00:15:39.327 "total_recv_wrs": 4096, 00:15:39.327 "recv_doorbell_updates": 1 00:15:39.327 } 00:15:39.327 ] 00:15:39.327 } 00:15:39.327 ] 00:15:39.327 } 00:15:39.327 ] 00:15:39.327 }' 00:15:39.327 16:06:09 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:39.327 16:06:09 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:39.327 16:06:09 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:39.327 16:06:09 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:39.327 16:06:09 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:39.327 16:06:09 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:39.327 16:06:09 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:39.327 
16:06:09 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:39.327 16:06:09 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:39.327 16:06:10 -- target/rpc.sh@113 -- # (( 105 > 0 )) 00:15:39.327 16:06:10 -- target/rpc.sh@115 -- # '[' rdma == rdma ']' 00:15:39.327 16:06:10 -- target/rpc.sh@117 -- # jsum '.poll_groups[].transports[].devices[].completions' 00:15:39.327 16:06:10 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].completions' 00:15:39.327 16:06:10 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].completions' 00:15:39.327 16:06:10 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:39.327 16:06:10 -- target/rpc.sh@117 -- # (( 1288 > 0 )) 00:15:39.327 16:06:10 -- target/rpc.sh@118 -- # jsum '.poll_groups[].transports[].devices[].request_latency' 00:15:39.327 16:06:10 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].transports[].devices[].request_latency' 00:15:39.327 16:06:10 -- target/rpc.sh@20 -- # jq '.poll_groups[].transports[].devices[].request_latency' 00:15:39.327 16:06:10 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:39.327 16:06:10 -- target/rpc.sh@118 -- # (( 125183362 > 0 )) 00:15:39.327 16:06:10 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:39.327 16:06:10 -- target/rpc.sh@123 -- # nvmftestfini 00:15:39.327 16:06:10 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:39.327 16:06:10 -- nvmf/common.sh@116 -- # sync 00:15:39.327 16:06:10 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:39.327 16:06:10 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:39.327 16:06:10 -- nvmf/common.sh@119 -- # set +e 00:15:39.327 16:06:10 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:39.327 16:06:10 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:39.327 rmmod nvme_rdma 00:15:39.587 rmmod nvme_fabrics 00:15:39.587 16:06:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:39.587 16:06:10 -- nvmf/common.sh@123 -- # set -e 00:15:39.587 16:06:10 -- nvmf/common.sh@124 -- # return 0 00:15:39.587 16:06:10 -- nvmf/common.sh@477 -- # '[' -n 1292997 ']' 00:15:39.587 16:06:10 -- nvmf/common.sh@478 -- # killprocess 1292997 00:15:39.587 16:06:10 -- common/autotest_common.sh@936 -- # '[' -z 1292997 ']' 00:15:39.587 16:06:10 -- common/autotest_common.sh@940 -- # kill -0 1292997 00:15:39.587 16:06:10 -- common/autotest_common.sh@941 -- # uname 00:15:39.587 16:06:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:39.587 16:06:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1292997 00:15:39.587 16:06:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:39.587 16:06:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:39.587 16:06:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1292997' 00:15:39.587 killing process with pid 1292997 00:15:39.587 16:06:10 -- common/autotest_common.sh@955 -- # kill 1292997 00:15:39.587 16:06:10 -- common/autotest_common.sh@960 -- # wait 1292997 00:15:39.846 16:06:10 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:39.846 16:06:10 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:39.846 00:15:39.846 real 0m37.543s 00:15:39.846 user 2m4.082s 00:15:39.846 sys 0m6.745s 00:15:39.846 16:06:10 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:39.846 16:06:10 -- common/autotest_common.sh@10 -- # set +x 00:15:39.846 ************************************ 00:15:39.846 END TEST nvmf_rpc 00:15:39.846 ************************************ 00:15:39.846 16:06:10 -- 
nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:39.846 16:06:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:39.846 16:06:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:39.846 16:06:10 -- common/autotest_common.sh@10 -- # set +x 00:15:39.846 ************************************ 00:15:39.846 START TEST nvmf_invalid 00:15:39.846 ************************************ 00:15:39.846 16:06:10 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=rdma 00:15:39.846 * Looking for test storage... 00:15:39.846 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:39.846 16:06:10 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:39.846 16:06:10 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:39.846 16:06:10 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:40.107 16:06:10 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:40.107 16:06:10 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:40.107 16:06:10 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:40.107 16:06:10 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:40.107 16:06:10 -- scripts/common.sh@335 -- # IFS=.-: 00:15:40.107 16:06:10 -- scripts/common.sh@335 -- # read -ra ver1 00:15:40.107 16:06:10 -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.107 16:06:10 -- scripts/common.sh@336 -- # read -ra ver2 00:15:40.107 16:06:10 -- scripts/common.sh@337 -- # local 'op=<' 00:15:40.107 16:06:10 -- scripts/common.sh@339 -- # ver1_l=2 00:15:40.107 16:06:10 -- scripts/common.sh@340 -- # ver2_l=1 00:15:40.107 16:06:10 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:40.107 16:06:10 -- scripts/common.sh@343 -- # case "$op" in 00:15:40.107 16:06:10 -- scripts/common.sh@344 -- # : 1 00:15:40.107 16:06:10 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:40.107 16:06:10 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:40.107 16:06:10 -- scripts/common.sh@364 -- # decimal 1 00:15:40.107 16:06:10 -- scripts/common.sh@352 -- # local d=1 00:15:40.107 16:06:10 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.107 16:06:10 -- scripts/common.sh@354 -- # echo 1 00:15:40.107 16:06:10 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:40.107 16:06:10 -- scripts/common.sh@365 -- # decimal 2 00:15:40.107 16:06:10 -- scripts/common.sh@352 -- # local d=2 00:15:40.107 16:06:10 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.107 16:06:10 -- scripts/common.sh@354 -- # echo 2 00:15:40.107 16:06:10 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:40.107 16:06:10 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:40.107 16:06:10 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:40.107 16:06:10 -- scripts/common.sh@367 -- # return 0 00:15:40.107 16:06:10 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.107 16:06:10 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:40.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.107 --rc genhtml_branch_coverage=1 00:15:40.107 --rc genhtml_function_coverage=1 00:15:40.107 --rc genhtml_legend=1 00:15:40.107 --rc geninfo_all_blocks=1 00:15:40.107 --rc geninfo_unexecuted_blocks=1 00:15:40.107 00:15:40.107 ' 00:15:40.107 16:06:10 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:40.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.107 --rc genhtml_branch_coverage=1 00:15:40.107 --rc genhtml_function_coverage=1 00:15:40.107 --rc genhtml_legend=1 00:15:40.107 --rc geninfo_all_blocks=1 00:15:40.107 --rc geninfo_unexecuted_blocks=1 00:15:40.107 00:15:40.107 ' 00:15:40.107 16:06:10 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:40.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.107 --rc genhtml_branch_coverage=1 00:15:40.107 --rc genhtml_function_coverage=1 00:15:40.107 --rc genhtml_legend=1 00:15:40.107 --rc geninfo_all_blocks=1 00:15:40.107 --rc geninfo_unexecuted_blocks=1 00:15:40.107 00:15:40.107 ' 00:15:40.107 16:06:10 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:40.107 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.107 --rc genhtml_branch_coverage=1 00:15:40.107 --rc genhtml_function_coverage=1 00:15:40.107 --rc genhtml_legend=1 00:15:40.107 --rc geninfo_all_blocks=1 00:15:40.107 --rc geninfo_unexecuted_blocks=1 00:15:40.107 00:15:40.107 ' 00:15:40.107 16:06:10 -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.107 16:06:10 -- nvmf/common.sh@7 -- # uname -s 00:15:40.107 16:06:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.107 16:06:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.107 16:06:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.107 16:06:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.107 16:06:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.107 16:06:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.107 16:06:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.107 16:06:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.107 16:06:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.107 16:06:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.107 16:06:10 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:40.107 16:06:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:40.107 16:06:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.107 16:06:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.107 16:06:10 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.107 16:06:10 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:40.107 16:06:10 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.107 16:06:10 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.107 16:06:10 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.107 16:06:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.107 16:06:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.107 16:06:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.107 16:06:10 -- paths/export.sh@5 -- # export PATH 00:15:40.107 16:06:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.107 16:06:10 -- nvmf/common.sh@46 -- # : 0 00:15:40.107 16:06:10 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:40.107 16:06:10 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:40.107 16:06:10 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:40.107 16:06:10 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.107 16:06:10 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.107 16:06:10 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:40.107 16:06:10 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:40.107 16:06:10 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:40.107 16:06:10 -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:40.107 16:06:10 -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:15:40.107 16:06:10 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:40.107 16:06:10 -- target/invalid.sh@14 -- # target=foobar 00:15:40.107 16:06:10 -- target/invalid.sh@16 -- # RANDOM=0 00:15:40.107 16:06:10 -- target/invalid.sh@34 -- # nvmftestinit 00:15:40.107 16:06:10 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:40.107 16:06:10 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.107 16:06:10 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:40.107 16:06:10 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:40.107 16:06:10 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:40.107 16:06:10 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.107 16:06:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:40.107 16:06:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.107 16:06:10 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:40.107 16:06:10 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:40.107 16:06:10 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:40.107 16:06:10 -- common/autotest_common.sh@10 -- # set +x 00:15:46.682 16:06:17 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:46.682 16:06:17 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:46.682 16:06:17 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:46.682 16:06:17 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:46.682 16:06:17 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:46.682 16:06:17 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:46.682 16:06:17 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:46.682 16:06:17 -- nvmf/common.sh@294 -- # net_devs=() 00:15:46.682 16:06:17 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:46.682 16:06:17 -- nvmf/common.sh@295 -- # e810=() 00:15:46.682 16:06:17 -- nvmf/common.sh@295 -- # local -ga e810 00:15:46.682 16:06:17 -- nvmf/common.sh@296 -- # x722=() 00:15:46.682 16:06:17 -- nvmf/common.sh@296 -- # local -ga x722 00:15:46.682 16:06:17 -- nvmf/common.sh@297 -- # mlx=() 00:15:46.682 16:06:17 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:46.682 16:06:17 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:46.682 16:06:17 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:46.682 16:06:17 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:46.682 16:06:17 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:46.682 16:06:17 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:46.682 16:06:17 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:46.682 16:06:17 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:46.682 16:06:17 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:46.682 16:06:17 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:46.682 16:06:17 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:46.682 16:06:17 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:46.682 16:06:17 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:46.682 16:06:17 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:46.682 16:06:17 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:46.682 16:06:17 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:46.682 16:06:17 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:46.682 16:06:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:46.682 16:06:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:46.682 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:46.682 16:06:17 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:46.682 16:06:17 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:46.682 16:06:17 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:46.682 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:46.682 16:06:17 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:46.682 16:06:17 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:46.682 16:06:17 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:46.682 16:06:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.682 16:06:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:46.682 16:06:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.682 16:06:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:46.682 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:46.682 16:06:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.682 16:06:17 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:46.682 16:06:17 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.682 16:06:17 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:46.682 16:06:17 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.682 16:06:17 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:46.682 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:46.682 16:06:17 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.682 16:06:17 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:46.682 16:06:17 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:46.682 16:06:17 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@407 -- 
# [[ rdma == rdma ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:46.682 16:06:17 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:46.682 16:06:17 -- nvmf/common.sh@57 -- # uname 00:15:46.682 16:06:17 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:46.682 16:06:17 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:46.682 16:06:17 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:46.682 16:06:17 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:46.682 16:06:17 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:46.682 16:06:17 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:46.682 16:06:17 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:46.682 16:06:17 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:46.682 16:06:17 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:46.682 16:06:17 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:46.682 16:06:17 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:46.682 16:06:17 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:46.682 16:06:17 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:46.682 16:06:17 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:46.682 16:06:17 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:46.682 16:06:17 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:46.682 16:06:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:46.682 16:06:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:46.682 16:06:17 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:46.682 16:06:17 -- nvmf/common.sh@104 -- # continue 2 00:15:46.682 16:06:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:46.682 16:06:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:46.682 16:06:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:46.682 16:06:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:46.682 16:06:17 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:46.682 16:06:17 -- nvmf/common.sh@104 -- # continue 2 00:15:46.682 16:06:17 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:46.682 16:06:17 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:46.682 16:06:17 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:46.683 16:06:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:46.683 16:06:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:46.683 16:06:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:46.683 16:06:17 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:46.683 16:06:17 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:46.683 16:06:17 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:46.683 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:46.683 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:46.683 altname enp217s0f0np0 00:15:46.683 altname ens818f0np0 00:15:46.683 inet 192.168.100.8/24 scope global mlx_0_0 00:15:46.683 valid_lft forever preferred_lft forever 00:15:46.683 16:06:17 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:46.683 16:06:17 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:46.683 16:06:17 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:46.683 16:06:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:46.683 16:06:17 -- 
nvmf/common.sh@112 -- # awk '{print $4}' 00:15:46.683 16:06:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:46.683 16:06:17 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:46.683 16:06:17 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:46.683 16:06:17 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:46.683 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:46.683 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:46.683 altname enp217s0f1np1 00:15:46.683 altname ens818f1np1 00:15:46.683 inet 192.168.100.9/24 scope global mlx_0_1 00:15:46.683 valid_lft forever preferred_lft forever 00:15:46.683 16:06:17 -- nvmf/common.sh@410 -- # return 0 00:15:46.683 16:06:17 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:46.683 16:06:17 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:46.683 16:06:17 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:46.683 16:06:17 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:46.683 16:06:17 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:46.683 16:06:17 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:46.683 16:06:17 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:46.683 16:06:17 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:46.683 16:06:17 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:46.683 16:06:17 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:46.683 16:06:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:46.683 16:06:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:46.683 16:06:17 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:46.683 16:06:17 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:46.683 16:06:17 -- nvmf/common.sh@104 -- # continue 2 00:15:46.683 16:06:17 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:46.683 16:06:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:46.683 16:06:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:46.683 16:06:17 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:46.683 16:06:17 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:46.683 16:06:17 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:46.683 16:06:17 -- nvmf/common.sh@104 -- # continue 2 00:15:46.683 16:06:17 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:46.683 16:06:17 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:46.683 16:06:17 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:46.683 16:06:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:46.683 16:06:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:46.683 16:06:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:46.683 16:06:17 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:46.683 16:06:17 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:46.683 16:06:17 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:46.683 16:06:17 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:46.683 16:06:17 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:46.683 16:06:17 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:46.683 16:06:17 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:46.683 192.168.100.9' 00:15:46.683 16:06:17 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:46.683 192.168.100.9' 00:15:46.683 16:06:17 -- nvmf/common.sh@445 -- # head -n 1 00:15:46.683 16:06:17 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:46.683 16:06:17 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:46.683 192.168.100.9' 00:15:46.683 16:06:17 -- nvmf/common.sh@446 -- # tail -n +2 00:15:46.683 16:06:17 -- nvmf/common.sh@446 -- # head -n 1 00:15:46.683 16:06:17 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:46.683 16:06:17 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:46.683 16:06:17 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:46.683 16:06:17 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:46.683 16:06:17 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:46.683 16:06:17 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:46.683 16:06:17 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:46.683 16:06:17 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:46.683 16:06:17 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:46.683 16:06:17 -- common/autotest_common.sh@10 -- # set +x 00:15:46.683 16:06:17 -- nvmf/common.sh@469 -- # nvmfpid=1302191 00:15:46.683 16:06:17 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:46.683 16:06:17 -- nvmf/common.sh@470 -- # waitforlisten 1302191 00:15:46.683 16:06:17 -- common/autotest_common.sh@829 -- # '[' -z 1302191 ']' 00:15:46.683 16:06:17 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.683 16:06:17 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:46.683 16:06:17 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.683 16:06:17 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:46.683 16:06:17 -- common/autotest_common.sh@10 -- # set +x 00:15:46.683 [2024-11-20 16:06:17.380787] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:15:46.683 [2024-11-20 16:06:17.380836] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.683 EAL: No free 2048 kB hugepages reported on node 1 00:15:46.683 [2024-11-20 16:06:17.450545] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:46.943 [2024-11-20 16:06:17.488288] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:46.943 [2024-11-20 16:06:17.488403] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:46.943 [2024-11-20 16:06:17.488417] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.943 [2024-11-20 16:06:17.488427] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
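The get_ip_address / get_available_rdma_ips traces above show how the target addresses used for the rest of this run are derived: for each RDMA netdev, `ip -o -4 addr show` is piped through awk (field 4 is the CIDR address) and cut (to strip the prefix length), and the first and second results become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal standalone sketch of that pipeline follows; the helper name first_ipv4_of and the hard-coded interface list are illustrative, not part of nvmf/common.sh itself.

# Sketch only: mirrors the ip/awk/cut pipeline traced above.
first_ipv4_of() {
    local ifname=$1
    # "ip -o -4 addr show" prints one line per address; field 4 is the CIDR form, e.g. 192.168.100.8/24
    ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1 | head -n 1
}

rdma_ifs=(mlx_0_0 mlx_0_1)              # the RDMA netdevs reported earlier in this log
ips=()
for ifname in "${rdma_ifs[@]}"; do
    ips+=("$(first_ipv4_of "$ifname")")
done

FIRST_TARGET_IP=${ips[0]}               # 192.168.100.8 in this run
SECOND_TARGET_IP=${ips[1]}              # 192.168.100.9 in this run
echo "$FIRST_TARGET_IP $SECOND_TARGET_IP"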
00:15:46.943 [2024-11-20 16:06:17.488484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.943 [2024-11-20 16:06:17.488585] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:46.943 [2024-11-20 16:06:17.488608] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:46.943 [2024-11-20 16:06:17.488610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.511 16:06:18 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:47.511 16:06:18 -- common/autotest_common.sh@862 -- # return 0 00:15:47.511 16:06:18 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:47.511 16:06:18 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:47.511 16:06:18 -- common/autotest_common.sh@10 -- # set +x 00:15:47.511 16:06:18 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:47.511 16:06:18 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:47.511 16:06:18 -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode3414 00:15:47.770 [2024-11-20 16:06:18.410487] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:47.770 16:06:18 -- target/invalid.sh@40 -- # out='request: 00:15:47.770 { 00:15:47.770 "nqn": "nqn.2016-06.io.spdk:cnode3414", 00:15:47.770 "tgt_name": "foobar", 00:15:47.770 "method": "nvmf_create_subsystem", 00:15:47.770 "req_id": 1 00:15:47.770 } 00:15:47.770 Got JSON-RPC error response 00:15:47.770 response: 00:15:47.770 { 00:15:47.770 "code": -32603, 00:15:47.770 "message": "Unable to find target foobar" 00:15:47.770 }' 00:15:47.770 16:06:18 -- target/invalid.sh@41 -- # [[ request: 00:15:47.770 { 00:15:47.770 "nqn": "nqn.2016-06.io.spdk:cnode3414", 00:15:47.770 "tgt_name": "foobar", 00:15:47.770 "method": "nvmf_create_subsystem", 00:15:47.770 "req_id": 1 00:15:47.770 } 00:15:47.770 Got JSON-RPC error response 00:15:47.770 response: 00:15:47.770 { 00:15:47.770 "code": -32603, 00:15:47.770 "message": "Unable to find target foobar" 00:15:47.770 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:47.771 16:06:18 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:47.771 16:06:18 -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode11075 00:15:48.030 [2024-11-20 16:06:18.607188] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11075: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:48.030 16:06:18 -- target/invalid.sh@45 -- # out='request: 00:15:48.030 { 00:15:48.030 "nqn": "nqn.2016-06.io.spdk:cnode11075", 00:15:48.030 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:48.030 "method": "nvmf_create_subsystem", 00:15:48.030 "req_id": 1 00:15:48.030 } 00:15:48.030 Got JSON-RPC error response 00:15:48.030 response: 00:15:48.030 { 00:15:48.030 "code": -32602, 00:15:48.030 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:48.030 }' 00:15:48.030 16:06:18 -- target/invalid.sh@46 -- # [[ request: 00:15:48.030 { 00:15:48.030 "nqn": "nqn.2016-06.io.spdk:cnode11075", 00:15:48.030 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:48.030 "method": "nvmf_create_subsystem", 00:15:48.030 "req_id": 1 00:15:48.030 } 00:15:48.030 Got JSON-RPC error response 00:15:48.030 response: 00:15:48.030 { 00:15:48.030 
"code": -32602, 00:15:48.030 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:48.030 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:48.030 16:06:18 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:48.030 16:06:18 -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode32230 00:15:48.031 [2024-11-20 16:06:18.803802] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode32230: invalid model number 'SPDK_Controller' 00:15:48.290 16:06:18 -- target/invalid.sh@50 -- # out='request: 00:15:48.290 { 00:15:48.290 "nqn": "nqn.2016-06.io.spdk:cnode32230", 00:15:48.290 "model_number": "SPDK_Controller\u001f", 00:15:48.290 "method": "nvmf_create_subsystem", 00:15:48.290 "req_id": 1 00:15:48.290 } 00:15:48.290 Got JSON-RPC error response 00:15:48.290 response: 00:15:48.290 { 00:15:48.290 "code": -32602, 00:15:48.290 "message": "Invalid MN SPDK_Controller\u001f" 00:15:48.290 }' 00:15:48.290 16:06:18 -- target/invalid.sh@51 -- # [[ request: 00:15:48.290 { 00:15:48.290 "nqn": "nqn.2016-06.io.spdk:cnode32230", 00:15:48.291 "model_number": "SPDK_Controller\u001f", 00:15:48.291 "method": "nvmf_create_subsystem", 00:15:48.291 "req_id": 1 00:15:48.291 } 00:15:48.291 Got JSON-RPC error response 00:15:48.291 response: 00:15:48.291 { 00:15:48.291 "code": -32602, 00:15:48.291 "message": "Invalid MN SPDK_Controller\u001f" 00:15:48.291 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:48.291 16:06:18 -- target/invalid.sh@54 -- # gen_random_s 21 00:15:48.291 16:06:18 -- target/invalid.sh@19 -- # local length=21 ll 00:15:48.291 16:06:18 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:48.291 16:06:18 -- target/invalid.sh@21 -- # local chars 00:15:48.291 16:06:18 -- target/invalid.sh@22 -- # local string 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 90 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+=Z 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 120 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+=x 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 57 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x39' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+=9 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 104 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e 
'\x68' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+=h 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 48 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+=0 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 122 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+=z 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 51 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x33' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+=3 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 58 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x3a' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+=: 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 44 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x2c' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+=, 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 103 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+=g 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 93 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+=']' 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 98 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+=b 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 91 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+='[' 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 60 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+='<' 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 101 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e 
'\x65' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+=e 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 118 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x76' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+=v 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 93 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x5d' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+=']' 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 112 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+=p 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 42 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+='*' 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 120 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+=x 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # printf %x 96 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:48.291 16:06:18 -- target/invalid.sh@25 -- # string+='`' 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll++ )) 00:15:48.291 16:06:18 -- target/invalid.sh@24 -- # (( ll < length )) 00:15:48.291 16:06:18 -- target/invalid.sh@28 -- # [[ Z == \- ]] 00:15:48.291 16:06:18 -- target/invalid.sh@31 -- # echo 'Zx9h0z3:,g]b[OklF$m!yFGis`K6t6R' 00:15:48.813 16:06:19 -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '=/b;{5=K!kP8P~sz%2;a20>OklF$m!yFGis`K6t6R' nqn.2016-06.io.spdk:cnode26028 00:15:49.072 [2024-11-20 16:06:19.670774] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26028: invalid model number '=/b;{5=K!kP8P~sz%2;a20>OklF$m!yFGis`K6t6R' 00:15:49.072 16:06:19 -- target/invalid.sh@58 -- # out='request: 00:15:49.072 { 00:15:49.072 "nqn": "nqn.2016-06.io.spdk:cnode26028", 00:15:49.072 "model_number": "=/b;{5=K!kP8P~sz%2;a20>OklF$m!yFGis`K6t6R", 00:15:49.072 "method": "nvmf_create_subsystem", 00:15:49.072 "req_id": 1 00:15:49.072 } 00:15:49.072 Got JSON-RPC error response 00:15:49.072 response: 00:15:49.072 { 00:15:49.072 "code": -32602, 00:15:49.072 "message": "Invalid MN =/b;{5=K!kP8P~sz%2;a20>OklF$m!yFGis`K6t6R" 00:15:49.072 }' 00:15:49.072 16:06:19 -- target/invalid.sh@59 -- # [[ request: 00:15:49.072 { 00:15:49.072 "nqn": "nqn.2016-06.io.spdk:cnode26028", 00:15:49.072 "model_number": "=/b;{5=K!kP8P~sz%2;a20>OklF$m!yFGis`K6t6R", 00:15:49.072 "method": "nvmf_create_subsystem", 00:15:49.072 "req_id": 1 00:15:49.072 } 00:15:49.072 Got JSON-RPC error response 00:15:49.072 
response: 00:15:49.072 { 00:15:49.072 "code": -32602, 00:15:49.072 "message": "Invalid MN =/b;{5=K!kP8P~sz%2;a20>OklF$m!yFGis`K6t6R" 00:15:49.072 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:49.072 16:06:19 -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype rdma 00:15:49.333 [2024-11-20 16:06:19.885145] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a369b0/0x1a3aea0) succeed. 00:15:49.333 [2024-11-20 16:06:19.894374] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a37f50/0x1a7c540) succeed. 00:15:49.333 16:06:20 -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:15:49.593 16:06:20 -- target/invalid.sh@64 -- # [[ rdma == \T\C\P ]] 00:15:49.593 16:06:20 -- target/invalid.sh@67 -- # echo '192.168.100.8 00:15:49.593 192.168.100.9' 00:15:49.593 16:06:20 -- target/invalid.sh@67 -- # head -n 1 00:15:49.593 16:06:20 -- target/invalid.sh@67 -- # IP=192.168.100.8 00:15:49.593 16:06:20 -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t rdma -a 192.168.100.8 -s 4421 00:15:49.853 [2024-11-20 16:06:20.407420] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:15:49.853 16:06:20 -- target/invalid.sh@69 -- # out='request: 00:15:49.853 { 00:15:49.853 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:49.853 "listen_address": { 00:15:49.853 "trtype": "rdma", 00:15:49.853 "traddr": "192.168.100.8", 00:15:49.853 "trsvcid": "4421" 00:15:49.853 }, 00:15:49.853 "method": "nvmf_subsystem_remove_listener", 00:15:49.853 "req_id": 1 00:15:49.853 } 00:15:49.853 Got JSON-RPC error response 00:15:49.853 response: 00:15:49.853 { 00:15:49.853 "code": -32602, 00:15:49.853 "message": "Invalid parameters" 00:15:49.853 }' 00:15:49.853 16:06:20 -- target/invalid.sh@70 -- # [[ request: 00:15:49.853 { 00:15:49.853 "nqn": "nqn.2016-06.io.spdk:cnode", 00:15:49.853 "listen_address": { 00:15:49.853 "trtype": "rdma", 00:15:49.853 "traddr": "192.168.100.8", 00:15:49.853 "trsvcid": "4421" 00:15:49.853 }, 00:15:49.853 "method": "nvmf_subsystem_remove_listener", 00:15:49.853 "req_id": 1 00:15:49.853 } 00:15:49.853 Got JSON-RPC error response 00:15:49.853 response: 00:15:49.853 { 00:15:49.853 "code": -32602, 00:15:49.853 "message": "Invalid parameters" 00:15:49.853 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:15:49.853 16:06:20 -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26453 -i 0 00:15:49.853 [2024-11-20 16:06:20.604112] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26453: invalid cntlid range [0-65519] 00:15:49.853 16:06:20 -- target/invalid.sh@73 -- # out='request: 00:15:49.853 { 00:15:49.853 "nqn": "nqn.2016-06.io.spdk:cnode26453", 00:15:49.853 "min_cntlid": 0, 00:15:49.853 "method": "nvmf_create_subsystem", 00:15:49.853 "req_id": 1 00:15:49.853 } 00:15:49.853 Got JSON-RPC error response 00:15:49.853 response: 00:15:49.853 { 00:15:49.853 "code": -32602, 00:15:49.853 "message": "Invalid cntlid range [0-65519]" 00:15:49.853 }' 00:15:49.853 16:06:20 -- target/invalid.sh@74 -- # [[ request: 00:15:49.853 { 00:15:49.853 "nqn": "nqn.2016-06.io.spdk:cnode26453", 00:15:49.853 "min_cntlid": 0, 00:15:49.853 "method": "nvmf_create_subsystem", 00:15:49.853 "req_id": 1 
00:15:49.853 } 00:15:49.853 Got JSON-RPC error response 00:15:49.853 response: 00:15:49.853 { 00:15:49.853 "code": -32602, 00:15:49.853 "message": "Invalid cntlid range [0-65519]" 00:15:49.853 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:49.853 16:06:20 -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14607 -i 65520 00:15:50.113 [2024-11-20 16:06:20.804825] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14607: invalid cntlid range [65520-65519] 00:15:50.113 16:06:20 -- target/invalid.sh@75 -- # out='request: 00:15:50.113 { 00:15:50.113 "nqn": "nqn.2016-06.io.spdk:cnode14607", 00:15:50.113 "min_cntlid": 65520, 00:15:50.113 "method": "nvmf_create_subsystem", 00:15:50.113 "req_id": 1 00:15:50.113 } 00:15:50.113 Got JSON-RPC error response 00:15:50.113 response: 00:15:50.113 { 00:15:50.113 "code": -32602, 00:15:50.113 "message": "Invalid cntlid range [65520-65519]" 00:15:50.113 }' 00:15:50.113 16:06:20 -- target/invalid.sh@76 -- # [[ request: 00:15:50.113 { 00:15:50.113 "nqn": "nqn.2016-06.io.spdk:cnode14607", 00:15:50.113 "min_cntlid": 65520, 00:15:50.113 "method": "nvmf_create_subsystem", 00:15:50.113 "req_id": 1 00:15:50.113 } 00:15:50.113 Got JSON-RPC error response 00:15:50.113 response: 00:15:50.113 { 00:15:50.113 "code": -32602, 00:15:50.113 "message": "Invalid cntlid range [65520-65519]" 00:15:50.113 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:50.113 16:06:20 -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15128 -I 0 00:15:50.372 [2024-11-20 16:06:21.005532] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15128: invalid cntlid range [1-0] 00:15:50.372 16:06:21 -- target/invalid.sh@77 -- # out='request: 00:15:50.372 { 00:15:50.372 "nqn": "nqn.2016-06.io.spdk:cnode15128", 00:15:50.372 "max_cntlid": 0, 00:15:50.372 "method": "nvmf_create_subsystem", 00:15:50.372 "req_id": 1 00:15:50.372 } 00:15:50.372 Got JSON-RPC error response 00:15:50.372 response: 00:15:50.372 { 00:15:50.372 "code": -32602, 00:15:50.372 "message": "Invalid cntlid range [1-0]" 00:15:50.372 }' 00:15:50.372 16:06:21 -- target/invalid.sh@78 -- # [[ request: 00:15:50.372 { 00:15:50.372 "nqn": "nqn.2016-06.io.spdk:cnode15128", 00:15:50.372 "max_cntlid": 0, 00:15:50.372 "method": "nvmf_create_subsystem", 00:15:50.372 "req_id": 1 00:15:50.372 } 00:15:50.372 Got JSON-RPC error response 00:15:50.372 response: 00:15:50.372 { 00:15:50.372 "code": -32602, 00:15:50.372 "message": "Invalid cntlid range [1-0]" 00:15:50.372 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:50.372 16:06:21 -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8398 -I 65520 00:15:50.632 [2024-11-20 16:06:21.198255] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode8398: invalid cntlid range [1-65520] 00:15:50.632 16:06:21 -- target/invalid.sh@79 -- # out='request: 00:15:50.632 { 00:15:50.632 "nqn": "nqn.2016-06.io.spdk:cnode8398", 00:15:50.632 "max_cntlid": 65520, 00:15:50.632 "method": "nvmf_create_subsystem", 00:15:50.632 "req_id": 1 00:15:50.632 } 00:15:50.632 Got JSON-RPC error response 00:15:50.632 response: 00:15:50.632 { 00:15:50.632 "code": -32602, 00:15:50.632 "message": "Invalid cntlid range [1-65520]" 00:15:50.632 }' 00:15:50.632 16:06:21 -- 
target/invalid.sh@80 -- # [[ request: 00:15:50.632 { 00:15:50.632 "nqn": "nqn.2016-06.io.spdk:cnode8398", 00:15:50.632 "max_cntlid": 65520, 00:15:50.632 "method": "nvmf_create_subsystem", 00:15:50.632 "req_id": 1 00:15:50.632 } 00:15:50.632 Got JSON-RPC error response 00:15:50.632 response: 00:15:50.632 { 00:15:50.632 "code": -32602, 00:15:50.632 "message": "Invalid cntlid range [1-65520]" 00:15:50.632 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:50.632 16:06:21 -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode23791 -i 6 -I 5 00:15:50.632 [2024-11-20 16:06:21.390966] nvmf_rpc.c: 439:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode23791: invalid cntlid range [6-5] 00:15:50.632 16:06:21 -- target/invalid.sh@83 -- # out='request: 00:15:50.632 { 00:15:50.632 "nqn": "nqn.2016-06.io.spdk:cnode23791", 00:15:50.632 "min_cntlid": 6, 00:15:50.632 "max_cntlid": 5, 00:15:50.632 "method": "nvmf_create_subsystem", 00:15:50.632 "req_id": 1 00:15:50.632 } 00:15:50.632 Got JSON-RPC error response 00:15:50.632 response: 00:15:50.632 { 00:15:50.632 "code": -32602, 00:15:50.632 "message": "Invalid cntlid range [6-5]" 00:15:50.632 }' 00:15:50.632 16:06:21 -- target/invalid.sh@84 -- # [[ request: 00:15:50.632 { 00:15:50.632 "nqn": "nqn.2016-06.io.spdk:cnode23791", 00:15:50.632 "min_cntlid": 6, 00:15:50.632 "max_cntlid": 5, 00:15:50.632 "method": "nvmf_create_subsystem", 00:15:50.632 "req_id": 1 00:15:50.632 } 00:15:50.632 Got JSON-RPC error response 00:15:50.632 response: 00:15:50.632 { 00:15:50.632 "code": -32602, 00:15:50.632 "message": "Invalid cntlid range [6-5]" 00:15:50.632 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:15:50.632 16:06:21 -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:15:50.892 16:06:21 -- target/invalid.sh@87 -- # out='request: 00:15:50.892 { 00:15:50.892 "name": "foobar", 00:15:50.892 "method": "nvmf_delete_target", 00:15:50.892 "req_id": 1 00:15:50.892 } 00:15:50.892 Got JSON-RPC error response 00:15:50.892 response: 00:15:50.892 { 00:15:50.892 "code": -32602, 00:15:50.892 "message": "The specified target doesn'\''t exist, cannot delete it." 00:15:50.892 }' 00:15:50.892 16:06:21 -- target/invalid.sh@88 -- # [[ request: 00:15:50.892 { 00:15:50.892 "name": "foobar", 00:15:50.892 "method": "nvmf_delete_target", 00:15:50.892 "req_id": 1 00:15:50.892 } 00:15:50.892 Got JSON-RPC error response 00:15:50.892 response: 00:15:50.892 { 00:15:50.892 "code": -32602, 00:15:50.892 "message": "The specified target doesn't exist, cannot delete it." 
00:15:50.892 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:15:50.892 16:06:21 -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:15:50.892 16:06:21 -- target/invalid.sh@91 -- # nvmftestfini 00:15:50.892 16:06:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:15:50.892 16:06:21 -- nvmf/common.sh@116 -- # sync 00:15:50.892 16:06:21 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:15:50.892 16:06:21 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:15:50.892 16:06:21 -- nvmf/common.sh@119 -- # set +e 00:15:50.892 16:06:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:15:50.892 16:06:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:15:50.892 rmmod nvme_rdma 00:15:50.892 rmmod nvme_fabrics 00:15:50.892 16:06:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:15:50.892 16:06:21 -- nvmf/common.sh@123 -- # set -e 00:15:50.892 16:06:21 -- nvmf/common.sh@124 -- # return 0 00:15:50.892 16:06:21 -- nvmf/common.sh@477 -- # '[' -n 1302191 ']' 00:15:50.892 16:06:21 -- nvmf/common.sh@478 -- # killprocess 1302191 00:15:50.892 16:06:21 -- common/autotest_common.sh@936 -- # '[' -z 1302191 ']' 00:15:50.892 16:06:21 -- common/autotest_common.sh@940 -- # kill -0 1302191 00:15:50.892 16:06:21 -- common/autotest_common.sh@941 -- # uname 00:15:50.892 16:06:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:15:50.892 16:06:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1302191 00:15:50.892 16:06:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:15:50.892 16:06:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:15:50.892 16:06:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1302191' 00:15:50.892 killing process with pid 1302191 00:15:50.892 16:06:21 -- common/autotest_common.sh@955 -- # kill 1302191 00:15:50.892 16:06:21 -- common/autotest_common.sh@960 -- # wait 1302191 00:15:51.152 16:06:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:15:51.152 16:06:21 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:15:51.152 00:15:51.152 real 0m11.359s 00:15:51.152 user 0m21.695s 00:15:51.152 sys 0m6.250s 00:15:51.152 16:06:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:15:51.152 16:06:21 -- common/autotest_common.sh@10 -- # set +x 00:15:51.152 ************************************ 00:15:51.152 END TEST nvmf_invalid 00:15:51.152 ************************************ 00:15:51.152 16:06:21 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:51.152 16:06:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:15:51.152 16:06:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:15:51.152 16:06:21 -- common/autotest_common.sh@10 -- # set +x 00:15:51.152 ************************************ 00:15:51.152 START TEST nvmf_abort 00:15:51.152 ************************************ 00:15:51.152 16:06:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=rdma 00:15:51.412 * Looking for test storage... 
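Each negative test in invalid.sh above follows the same shape: the rpc.py call is expected to fail, its JSON-RPC error output is captured into `out`, and the test only passes if that message glob-matches the expected error text. A minimal sketch of that pattern, reusing the rpc.py path, the cnode26453 call and the "Invalid cntlid range" message seen in this run; the expect_error helper is illustrative, not the script's own function.

# Sketch of the capture-and-match pattern used by the invalid-parameter tests above.
rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

expect_error() {
    local pattern=$1; shift
    local out
    out=$("$@" 2>&1) || true             # the call is expected to fail; keep its error text
    [[ $out == *"$pattern"* ]]           # pass only if the JSON-RPC message matches
}

# min_cntlid below the allowed range must be rejected with "Invalid cntlid range"
expect_error 'Invalid cntlid range' \
    "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26453 -i 0 \
    && echo ok || echo FAIL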
00:15:51.412 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:15:51.412 16:06:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:15:51.412 16:06:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:15:51.412 16:06:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:15:51.412 16:06:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:15:51.412 16:06:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:15:51.412 16:06:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:15:51.412 16:06:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:15:51.412 16:06:22 -- scripts/common.sh@335 -- # IFS=.-: 00:15:51.412 16:06:22 -- scripts/common.sh@335 -- # read -ra ver1 00:15:51.412 16:06:22 -- scripts/common.sh@336 -- # IFS=.-: 00:15:51.413 16:06:22 -- scripts/common.sh@336 -- # read -ra ver2 00:15:51.413 16:06:22 -- scripts/common.sh@337 -- # local 'op=<' 00:15:51.413 16:06:22 -- scripts/common.sh@339 -- # ver1_l=2 00:15:51.413 16:06:22 -- scripts/common.sh@340 -- # ver2_l=1 00:15:51.413 16:06:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:15:51.413 16:06:22 -- scripts/common.sh@343 -- # case "$op" in 00:15:51.413 16:06:22 -- scripts/common.sh@344 -- # : 1 00:15:51.413 16:06:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:15:51.413 16:06:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:51.413 16:06:22 -- scripts/common.sh@364 -- # decimal 1 00:15:51.413 16:06:22 -- scripts/common.sh@352 -- # local d=1 00:15:51.413 16:06:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:51.413 16:06:22 -- scripts/common.sh@354 -- # echo 1 00:15:51.413 16:06:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:15:51.413 16:06:22 -- scripts/common.sh@365 -- # decimal 2 00:15:51.413 16:06:22 -- scripts/common.sh@352 -- # local d=2 00:15:51.413 16:06:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:51.413 16:06:22 -- scripts/common.sh@354 -- # echo 2 00:15:51.413 16:06:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:15:51.413 16:06:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:15:51.413 16:06:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:15:51.413 16:06:22 -- scripts/common.sh@367 -- # return 0 00:15:51.413 16:06:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:51.413 16:06:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:15:51.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.413 --rc genhtml_branch_coverage=1 00:15:51.413 --rc genhtml_function_coverage=1 00:15:51.413 --rc genhtml_legend=1 00:15:51.413 --rc geninfo_all_blocks=1 00:15:51.413 --rc geninfo_unexecuted_blocks=1 00:15:51.413 00:15:51.413 ' 00:15:51.413 16:06:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:15:51.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.413 --rc genhtml_branch_coverage=1 00:15:51.413 --rc genhtml_function_coverage=1 00:15:51.413 --rc genhtml_legend=1 00:15:51.413 --rc geninfo_all_blocks=1 00:15:51.413 --rc geninfo_unexecuted_blocks=1 00:15:51.413 00:15:51.413 ' 00:15:51.413 16:06:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:15:51.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.413 --rc genhtml_branch_coverage=1 00:15:51.413 --rc genhtml_function_coverage=1 00:15:51.413 --rc genhtml_legend=1 00:15:51.413 --rc geninfo_all_blocks=1 00:15:51.413 --rc geninfo_unexecuted_blocks=1 00:15:51.413 00:15:51.413 ' 
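The scripts/common.sh trace above steps through the lcov version check one component at a time: both version strings are split on '.', '-' and ':', each component is reduced to a decimal, and the components are compared left to right until one side differs. A condensed sketch of just the less-than case exercised here; the real cmp_versions also handles '>', '=' and non-numeric components, which this sketch simplifies to zero.

# Condensed sketch of the component-wise comparison traced above (the "<" case only).
version_lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"       # e.g. 1.15 -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"       # e.g. 2    -> (2)
    local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    local v a b
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0}                  # missing components compare as 0
        b=${ver2[v]:-0}
        if (( a > b )); then return 1; fi
        if (( a < b )); then return 0; fi
    done
    return 1                             # equal is not less-than
}

version_lt 1.15 2 && echo "lcov < 2: keep the 1.x lcov option names"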
00:15:51.413 16:06:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:15:51.413 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:51.413 --rc genhtml_branch_coverage=1 00:15:51.413 --rc genhtml_function_coverage=1 00:15:51.413 --rc genhtml_legend=1 00:15:51.413 --rc geninfo_all_blocks=1 00:15:51.413 --rc geninfo_unexecuted_blocks=1 00:15:51.413 00:15:51.413 ' 00:15:51.413 16:06:22 -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:15:51.413 16:06:22 -- nvmf/common.sh@7 -- # uname -s 00:15:51.413 16:06:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:51.413 16:06:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:51.413 16:06:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:51.413 16:06:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:51.413 16:06:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:51.413 16:06:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:51.413 16:06:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:51.413 16:06:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:51.413 16:06:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:51.413 16:06:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:51.413 16:06:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:15:51.413 16:06:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:15:51.413 16:06:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:51.413 16:06:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:51.413 16:06:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:51.413 16:06:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:15:51.413 16:06:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:51.413 16:06:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:51.413 16:06:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:51.413 16:06:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.413 16:06:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.413 16:06:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.413 16:06:22 -- paths/export.sh@5 -- # export PATH 00:15:51.413 16:06:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:51.413 16:06:22 -- nvmf/common.sh@46 -- # : 0 00:15:51.413 16:06:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:15:51.413 16:06:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:15:51.413 16:06:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:15:51.413 16:06:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:51.413 16:06:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:51.413 16:06:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:15:51.413 16:06:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:15:51.413 16:06:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:15:51.413 16:06:22 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:51.413 16:06:22 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:15:51.413 16:06:22 -- target/abort.sh@14 -- # nvmftestinit 00:15:51.413 16:06:22 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:15:51.413 16:06:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:51.413 16:06:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:15:51.413 16:06:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:15:51.413 16:06:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:15:51.413 16:06:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:51.413 16:06:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:51.413 16:06:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:51.413 16:06:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:15:51.413 16:06:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:15:51.413 16:06:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:15:51.413 16:06:22 -- common/autotest_common.sh@10 -- # set +x 00:15:58.092 16:06:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:15:58.092 16:06:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:15:58.092 16:06:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:15:58.092 16:06:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:15:58.092 16:06:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:15:58.092 16:06:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:15:58.092 16:06:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:15:58.092 16:06:28 -- nvmf/common.sh@294 -- # net_devs=() 00:15:58.092 16:06:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:15:58.092 16:06:28 -- nvmf/common.sh@295 -- 
# e810=() 00:15:58.092 16:06:28 -- nvmf/common.sh@295 -- # local -ga e810 00:15:58.092 16:06:28 -- nvmf/common.sh@296 -- # x722=() 00:15:58.092 16:06:28 -- nvmf/common.sh@296 -- # local -ga x722 00:15:58.092 16:06:28 -- nvmf/common.sh@297 -- # mlx=() 00:15:58.092 16:06:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:15:58.092 16:06:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:58.092 16:06:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:58.092 16:06:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:58.092 16:06:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:58.092 16:06:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:58.092 16:06:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:58.092 16:06:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:58.092 16:06:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:58.092 16:06:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:58.092 16:06:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:58.092 16:06:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:58.092 16:06:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:15:58.092 16:06:28 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:15:58.092 16:06:28 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:15:58.092 16:06:28 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:15:58.092 16:06:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:15:58.092 16:06:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:58.092 16:06:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:15:58.092 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:15:58.092 16:06:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:58.092 16:06:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:15:58.092 16:06:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:15:58.092 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:15:58.092 16:06:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:15:58.092 16:06:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:15:58.092 16:06:28 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:58.092 16:06:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.092 16:06:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
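The discovery loop above resolves each candidate RDMA PCI function to its Linux net devices through sysfs alone: the directory entries under /sys/bus/pci/devices/<bdf>/net/ are the interface names (mlx_0_0 and mlx_0_1 in this run). A small sketch of that lookup, assuming the two BDFs reported above; enabling nullglob so the no-netdev case is detectable is this sketch's own simplification, not taken from the script.

# Sketch of the sysfs lookup used above to map an RDMA NIC's PCI address to its netdevs.
shopt -s nullglob
pci_devs=(0000:d9:00.0 0000:d9:00.1)     # the BDFs reported in this run
net_devs=()
for pci in "${pci_devs[@]}"; do
    # Each kernel netdev bound to this PCI function appears as a directory entry here.
    pci_net_devs=( "/sys/bus/pci/devices/$pci/net/"* )
    if (( ${#pci_net_devs[@]} == 0 )); then
        continue                         # no netdev behind this function
    fi
    pci_net_devs=( "${pci_net_devs[@]##*/}" )   # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=( "${pci_net_devs[@]}" )
done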
00:15:58.092 16:06:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.092 16:06:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:15:58.092 Found net devices under 0000:d9:00.0: mlx_0_0 00:15:58.092 16:06:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.092 16:06:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:15:58.092 16:06:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:58.092 16:06:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:15:58.092 16:06:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:58.092 16:06:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:15:58.092 Found net devices under 0000:d9:00.1: mlx_0_1 00:15:58.092 16:06:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:15:58.092 16:06:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:15:58.092 16:06:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:15:58.092 16:06:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@408 -- # rdma_device_init 00:15:58.092 16:06:28 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:15:58.092 16:06:28 -- nvmf/common.sh@57 -- # uname 00:15:58.092 16:06:28 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:15:58.092 16:06:28 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:15:58.092 16:06:28 -- nvmf/common.sh@62 -- # modprobe ib_core 00:15:58.092 16:06:28 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:15:58.092 16:06:28 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:15:58.092 16:06:28 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:15:58.092 16:06:28 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:15:58.092 16:06:28 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:15:58.092 16:06:28 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:15:58.092 16:06:28 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:15:58.092 16:06:28 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:15:58.092 16:06:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:58.092 16:06:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:58.092 16:06:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:58.092 16:06:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:58.092 16:06:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:58.092 16:06:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:58.092 16:06:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.092 16:06:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:58.092 16:06:28 -- nvmf/common.sh@104 -- # continue 2 00:15:58.092 16:06:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:58.092 16:06:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.092 16:06:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.092 16:06:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:58.092 16:06:28 -- nvmf/common.sh@104 -- # continue 2 00:15:58.092 16:06:28 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:15:58.092 16:06:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:15:58.092 16:06:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:58.092 16:06:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:58.092 16:06:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:58.092 16:06:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:58.092 16:06:28 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:15:58.092 16:06:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:15:58.092 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:58.092 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:15:58.092 altname enp217s0f0np0 00:15:58.092 altname ens818f0np0 00:15:58.092 inet 192.168.100.8/24 scope global mlx_0_0 00:15:58.092 valid_lft forever preferred_lft forever 00:15:58.092 16:06:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:15:58.092 16:06:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:15:58.092 16:06:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:58.092 16:06:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:58.092 16:06:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:58.092 16:06:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:58.092 16:06:28 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:15:58.092 16:06:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:15:58.092 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:15:58.092 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:15:58.092 altname enp217s0f1np1 00:15:58.092 altname ens818f1np1 00:15:58.092 inet 192.168.100.9/24 scope global mlx_0_1 00:15:58.092 valid_lft forever preferred_lft forever 00:15:58.092 16:06:28 -- nvmf/common.sh@410 -- # return 0 00:15:58.092 16:06:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:15:58.092 16:06:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:15:58.092 16:06:28 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:15:58.092 16:06:28 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:15:58.092 16:06:28 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:15:58.092 16:06:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:15:58.352 16:06:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:15:58.352 16:06:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:15:58.352 16:06:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:15:58.352 16:06:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:15:58.352 16:06:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:58.352 16:06:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.352 16:06:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:15:58.352 16:06:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:15:58.352 16:06:28 -- nvmf/common.sh@104 -- # continue 2 00:15:58.352 16:06:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:15:58.352 16:06:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.352 16:06:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:15:58.352 16:06:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:15:58.352 16:06:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:15:58.352 16:06:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:15:58.352 16:06:28 -- 
nvmf/common.sh@104 -- # continue 2 00:15:58.352 16:06:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:58.352 16:06:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:15:58.352 16:06:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:15:58.352 16:06:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:15:58.352 16:06:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:58.352 16:06:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:58.352 16:06:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:15:58.352 16:06:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:15:58.352 16:06:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:15:58.352 16:06:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:15:58.352 16:06:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:15:58.352 16:06:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:15:58.352 16:06:28 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:15:58.352 192.168.100.9' 00:15:58.352 16:06:28 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:15:58.352 192.168.100.9' 00:15:58.352 16:06:28 -- nvmf/common.sh@445 -- # head -n 1 00:15:58.352 16:06:28 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:15:58.352 16:06:28 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:15:58.352 192.168.100.9' 00:15:58.352 16:06:28 -- nvmf/common.sh@446 -- # tail -n +2 00:15:58.352 16:06:28 -- nvmf/common.sh@446 -- # head -n 1 00:15:58.352 16:06:28 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:15:58.352 16:06:28 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:15:58.352 16:06:28 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:15:58.352 16:06:28 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:15:58.352 16:06:28 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:15:58.352 16:06:28 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:15:58.352 16:06:28 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:15:58.352 16:06:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:15:58.352 16:06:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:58.352 16:06:28 -- common/autotest_common.sh@10 -- # set +x 00:15:58.352 16:06:29 -- nvmf/common.sh@469 -- # nvmfpid=1306614 00:15:58.352 16:06:29 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:58.352 16:06:29 -- nvmf/common.sh@470 -- # waitforlisten 1306614 00:15:58.352 16:06:29 -- common/autotest_common.sh@829 -- # '[' -z 1306614 ']' 00:15:58.352 16:06:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:58.352 16:06:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:58.352 16:06:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:58.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:58.353 16:06:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:58.353 16:06:29 -- common/autotest_common.sh@10 -- # set +x 00:15:58.353 [2024-11-20 16:06:29.052439] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:15:58.353 [2024-11-20 16:06:29.052488] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:58.353 EAL: No free 2048 kB hugepages reported on node 1 00:15:58.353 [2024-11-20 16:06:29.122939] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:58.611 [2024-11-20 16:06:29.159336] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:15:58.611 [2024-11-20 16:06:29.159457] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:58.611 [2024-11-20 16:06:29.159468] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:58.611 [2024-11-20 16:06:29.159477] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:58.611 [2024-11-20 16:06:29.159539] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:15:58.611 [2024-11-20 16:06:29.159622] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:15:58.611 [2024-11-20 16:06:29.159624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:15:59.180 16:06:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:59.180 16:06:29 -- common/autotest_common.sh@862 -- # return 0 00:15:59.180 16:06:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:15:59.180 16:06:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:59.180 16:06:29 -- common/autotest_common.sh@10 -- # set +x 00:15:59.180 16:06:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:59.180 16:06:29 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 -a 256 00:15:59.180 16:06:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.180 16:06:29 -- common/autotest_common.sh@10 -- # set +x 00:15:59.180 [2024-11-20 16:06:29.935746] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2014920/0x2018dd0) succeed. 00:15:59.180 [2024-11-20 16:06:29.944725] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x2015e20/0x205a470) succeed. 
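At this point nvmftestinit has finished: the ib/rdma modules are loaded, mlx_0_0 and mlx_0_1 carry 192.168.100.8 and 192.168.100.9, NVMF_TRANSPORT_OPTS is '-t rdma --num-shared-buffers 1024', nvmfappstart has launched the target on cores 1-3 (pid 1306614), and the abort test has just created its RDMA transport, which is what produces the two create_ib_device notices. A condensed sketch of the address discovery and target launch, using only commands and paths that appear in this trace (the backgrounding glue is illustrative):

# Derive the IPv4 address assigned to an RDMA interface (192.168.100.8/.9 above).
get_ip_address() {
    ip -o -4 addr show "$1" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)

# Start the target: shm id 0, all tracepoint groups (0xFFFF), core mask 0xE,
# then wait for the /var/tmp/spdk.sock RPC socket before issuing RPC calls.
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!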
00:15:59.440 16:06:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.440 16:06:30 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:15:59.440 16:06:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.440 16:06:30 -- common/autotest_common.sh@10 -- # set +x 00:15:59.440 Malloc0 00:15:59.440 16:06:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.440 16:06:30 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:15:59.440 16:06:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.440 16:06:30 -- common/autotest_common.sh@10 -- # set +x 00:15:59.440 Delay0 00:15:59.440 16:06:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.440 16:06:30 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:59.440 16:06:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.440 16:06:30 -- common/autotest_common.sh@10 -- # set +x 00:15:59.440 16:06:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.440 16:06:30 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:15:59.440 16:06:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.440 16:06:30 -- common/autotest_common.sh@10 -- # set +x 00:15:59.440 16:06:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.440 16:06:30 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:15:59.440 16:06:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.440 16:06:30 -- common/autotest_common.sh@10 -- # set +x 00:15:59.440 [2024-11-20 16:06:30.094821] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:15:59.440 16:06:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.440 16:06:30 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:15:59.440 16:06:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.440 16:06:30 -- common/autotest_common.sh@10 -- # set +x 00:15:59.440 16:06:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.440 16:06:30 -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/abort -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:15:59.440 EAL: No free 2048 kB hugepages reported on node 1 00:15:59.440 [2024-11-20 16:06:30.184253] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:16:01.977 Initializing NVMe Controllers 00:16:01.977 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:16:01.977 controller IO queue size 128 less than required 00:16:01.977 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:16:01.977 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:16:01.977 Initialization complete. Launching workers. 
00:16:01.977 NS: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 51765 00:16:01.977 CTRLR: RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 51826, failed to submit 62 00:16:01.977 success 51765, unsuccess 61, failed 0 00:16:01.977 16:06:32 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:01.977 16:06:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.977 16:06:32 -- common/autotest_common.sh@10 -- # set +x 00:16:01.977 16:06:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.977 16:06:32 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:16:01.977 16:06:32 -- target/abort.sh@38 -- # nvmftestfini 00:16:01.977 16:06:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:01.977 16:06:32 -- nvmf/common.sh@116 -- # sync 00:16:01.977 16:06:32 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:01.977 16:06:32 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:01.977 16:06:32 -- nvmf/common.sh@119 -- # set +e 00:16:01.977 16:06:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:01.977 16:06:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:01.977 rmmod nvme_rdma 00:16:01.977 rmmod nvme_fabrics 00:16:01.977 16:06:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:01.977 16:06:32 -- nvmf/common.sh@123 -- # set -e 00:16:01.977 16:06:32 -- nvmf/common.sh@124 -- # return 0 00:16:01.977 16:06:32 -- nvmf/common.sh@477 -- # '[' -n 1306614 ']' 00:16:01.977 16:06:32 -- nvmf/common.sh@478 -- # killprocess 1306614 00:16:01.977 16:06:32 -- common/autotest_common.sh@936 -- # '[' -z 1306614 ']' 00:16:01.977 16:06:32 -- common/autotest_common.sh@940 -- # kill -0 1306614 00:16:01.977 16:06:32 -- common/autotest_common.sh@941 -- # uname 00:16:01.977 16:06:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:01.977 16:06:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1306614 00:16:01.977 16:06:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:01.977 16:06:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:01.977 16:06:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1306614' 00:16:01.977 killing process with pid 1306614 00:16:01.977 16:06:32 -- common/autotest_common.sh@955 -- # kill 1306614 00:16:01.977 16:06:32 -- common/autotest_common.sh@960 -- # wait 1306614 00:16:01.977 16:06:32 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:01.977 16:06:32 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:01.977 00:16:01.977 real 0m10.736s 00:16:01.977 user 0m14.593s 00:16:01.977 sys 0m5.766s 00:16:01.977 16:06:32 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:01.977 16:06:32 -- common/autotest_common.sh@10 -- # set +x 00:16:01.977 ************************************ 00:16:01.977 END TEST nvmf_abort 00:16:01.977 ************************************ 00:16:01.977 16:06:32 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:16:01.977 16:06:32 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:01.977 16:06:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:01.977 16:06:32 -- common/autotest_common.sh@10 -- # set +x 00:16:01.977 ************************************ 00:16:01.977 START TEST nvmf_ns_hotplug_stress 00:16:01.977 ************************************ 00:16:01.977 16:06:32 -- common/autotest_common.sh@1114 
-- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=rdma 00:16:02.237 * Looking for test storage... 00:16:02.237 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:02.237 16:06:32 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:02.237 16:06:32 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:02.237 16:06:32 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:02.237 16:06:32 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:02.237 16:06:32 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:02.237 16:06:32 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:02.237 16:06:32 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:02.237 16:06:32 -- scripts/common.sh@335 -- # IFS=.-: 00:16:02.237 16:06:32 -- scripts/common.sh@335 -- # read -ra ver1 00:16:02.237 16:06:32 -- scripts/common.sh@336 -- # IFS=.-: 00:16:02.237 16:06:32 -- scripts/common.sh@336 -- # read -ra ver2 00:16:02.237 16:06:32 -- scripts/common.sh@337 -- # local 'op=<' 00:16:02.237 16:06:32 -- scripts/common.sh@339 -- # ver1_l=2 00:16:02.237 16:06:32 -- scripts/common.sh@340 -- # ver2_l=1 00:16:02.237 16:06:32 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:02.237 16:06:32 -- scripts/common.sh@343 -- # case "$op" in 00:16:02.237 16:06:32 -- scripts/common.sh@344 -- # : 1 00:16:02.237 16:06:32 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:02.237 16:06:32 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:02.237 16:06:32 -- scripts/common.sh@364 -- # decimal 1 00:16:02.237 16:06:32 -- scripts/common.sh@352 -- # local d=1 00:16:02.237 16:06:32 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:02.237 16:06:32 -- scripts/common.sh@354 -- # echo 1 00:16:02.237 16:06:32 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:02.237 16:06:32 -- scripts/common.sh@365 -- # decimal 2 00:16:02.237 16:06:32 -- scripts/common.sh@352 -- # local d=2 00:16:02.237 16:06:32 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:02.237 16:06:32 -- scripts/common.sh@354 -- # echo 2 00:16:02.237 16:06:32 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:02.237 16:06:32 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:02.237 16:06:32 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:02.237 16:06:32 -- scripts/common.sh@367 -- # return 0 00:16:02.237 16:06:32 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:02.237 16:06:32 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:02.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.237 --rc genhtml_branch_coverage=1 00:16:02.237 --rc genhtml_function_coverage=1 00:16:02.237 --rc genhtml_legend=1 00:16:02.237 --rc geninfo_all_blocks=1 00:16:02.237 --rc geninfo_unexecuted_blocks=1 00:16:02.237 00:16:02.237 ' 00:16:02.237 16:06:32 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:02.237 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.237 --rc genhtml_branch_coverage=1 00:16:02.237 --rc genhtml_function_coverage=1 00:16:02.237 --rc genhtml_legend=1 00:16:02.238 --rc geninfo_all_blocks=1 00:16:02.238 --rc geninfo_unexecuted_blocks=1 00:16:02.238 00:16:02.238 ' 00:16:02.238 16:06:32 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:02.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.238 --rc genhtml_branch_coverage=1 00:16:02.238 --rc genhtml_function_coverage=1 
00:16:02.238 --rc genhtml_legend=1 00:16:02.238 --rc geninfo_all_blocks=1 00:16:02.238 --rc geninfo_unexecuted_blocks=1 00:16:02.238 00:16:02.238 ' 00:16:02.238 16:06:32 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:02.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:02.238 --rc genhtml_branch_coverage=1 00:16:02.238 --rc genhtml_function_coverage=1 00:16:02.238 --rc genhtml_legend=1 00:16:02.238 --rc geninfo_all_blocks=1 00:16:02.238 --rc geninfo_unexecuted_blocks=1 00:16:02.238 00:16:02.238 ' 00:16:02.238 16:06:32 -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:02.238 16:06:32 -- nvmf/common.sh@7 -- # uname -s 00:16:02.238 16:06:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:02.238 16:06:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:02.238 16:06:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:02.238 16:06:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:02.238 16:06:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:02.238 16:06:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:02.238 16:06:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:02.238 16:06:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:02.238 16:06:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:02.238 16:06:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:02.238 16:06:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:02.238 16:06:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:02.238 16:06:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:02.238 16:06:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:02.238 16:06:32 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:02.238 16:06:32 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:02.238 16:06:32 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:02.238 16:06:32 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:02.238 16:06:32 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:02.238 16:06:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.238 16:06:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.238 16:06:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.238 16:06:32 -- paths/export.sh@5 -- # export PATH 00:16:02.238 16:06:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:02.238 16:06:32 -- nvmf/common.sh@46 -- # : 0 00:16:02.238 16:06:32 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:02.238 16:06:32 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:02.238 16:06:32 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:02.238 16:06:32 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:02.238 16:06:32 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:02.238 16:06:32 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:02.238 16:06:32 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:02.238 16:06:32 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:02.238 16:06:32 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:16:02.238 16:06:32 -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:16:02.238 16:06:32 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:02.238 16:06:32 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:02.238 16:06:32 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:02.238 16:06:32 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:02.238 16:06:32 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:02.238 16:06:32 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:02.238 16:06:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:02.238 16:06:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:02.238 16:06:32 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:02.238 16:06:32 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:02.238 16:06:32 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:02.238 16:06:32 -- common/autotest_common.sh@10 -- # set +x 00:16:08.806 16:06:38 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:08.806 16:06:38 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:08.806 16:06:38 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:08.806 16:06:38 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:08.806 16:06:38 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:08.806 16:06:38 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:08.806 16:06:38 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:08.806 16:06:38 -- nvmf/common.sh@294 -- # net_devs=() 00:16:08.806 16:06:38 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:08.806 16:06:38 -- nvmf/common.sh@295 -- 
# e810=() 00:16:08.806 16:06:38 -- nvmf/common.sh@295 -- # local -ga e810 00:16:08.806 16:06:38 -- nvmf/common.sh@296 -- # x722=() 00:16:08.806 16:06:38 -- nvmf/common.sh@296 -- # local -ga x722 00:16:08.806 16:06:38 -- nvmf/common.sh@297 -- # mlx=() 00:16:08.806 16:06:38 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:08.806 16:06:38 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:08.806 16:06:38 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:08.806 16:06:38 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:08.806 16:06:38 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:08.806 16:06:38 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:08.806 16:06:38 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:08.806 16:06:38 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:08.806 16:06:38 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:08.806 16:06:38 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:08.806 16:06:38 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:08.806 16:06:38 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:08.806 16:06:38 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:08.806 16:06:38 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:08.806 16:06:38 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:08.806 16:06:38 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:08.806 16:06:38 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:16:08.806 16:06:38 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:08.806 16:06:38 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:08.806 16:06:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:08.806 16:06:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:08.806 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:08.806 16:06:38 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:08.807 16:06:38 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:08.807 16:06:38 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:08.807 16:06:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:08.807 16:06:38 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:08.807 16:06:38 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:08.807 16:06:38 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:08.807 16:06:38 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:08.807 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:08.807 16:06:38 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:08.807 16:06:38 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:08.807 16:06:38 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:08.807 16:06:38 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:08.807 16:06:38 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:08.807 16:06:38 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:08.807 16:06:38 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:08.807 16:06:38 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:08.807 16:06:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:08.807 16:06:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.807 16:06:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:16:08.807 16:06:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.807 16:06:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:08.807 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:08.807 16:06:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.807 16:06:38 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:08.807 16:06:38 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.807 16:06:38 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:08.807 16:06:38 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.807 16:06:38 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:08.807 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:08.807 16:06:38 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.807 16:06:38 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:08.807 16:06:38 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:08.807 16:06:38 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:08.807 16:06:38 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:08.807 16:06:38 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:08.807 16:06:38 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:08.807 16:06:38 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:08.807 16:06:38 -- nvmf/common.sh@57 -- # uname 00:16:08.807 16:06:38 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:08.807 16:06:38 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:08.807 16:06:38 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:08.807 16:06:38 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:08.807 16:06:38 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:08.807 16:06:38 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:08.807 16:06:38 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:08.807 16:06:38 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:08.807 16:06:38 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:08.807 16:06:38 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:08.807 16:06:38 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:08.807 16:06:38 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:08.807 16:06:38 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:08.807 16:06:38 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:08.807 16:06:38 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:08.807 16:06:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:08.807 16:06:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:08.807 16:06:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.807 16:06:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:08.807 16:06:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:08.807 16:06:39 -- nvmf/common.sh@104 -- # continue 2 00:16:08.807 16:06:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:08.807 16:06:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.807 16:06:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:08.807 16:06:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.807 16:06:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:08.807 16:06:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:08.807 16:06:39 -- nvmf/common.sh@104 -- # continue 2 00:16:08.807 16:06:39 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:16:08.807 16:06:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:08.807 16:06:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:08.807 16:06:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:08.807 16:06:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:08.807 16:06:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:08.807 16:06:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:08.807 16:06:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:08.807 16:06:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:08.807 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:08.807 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:08.807 altname enp217s0f0np0 00:16:08.807 altname ens818f0np0 00:16:08.807 inet 192.168.100.8/24 scope global mlx_0_0 00:16:08.807 valid_lft forever preferred_lft forever 00:16:08.807 16:06:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:08.807 16:06:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:08.807 16:06:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:08.807 16:06:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:08.807 16:06:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:08.807 16:06:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:08.807 16:06:39 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:08.807 16:06:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:08.807 16:06:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:08.807 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:08.807 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:08.807 altname enp217s0f1np1 00:16:08.807 altname ens818f1np1 00:16:08.807 inet 192.168.100.9/24 scope global mlx_0_1 00:16:08.807 valid_lft forever preferred_lft forever 00:16:08.807 16:06:39 -- nvmf/common.sh@410 -- # return 0 00:16:08.807 16:06:39 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:08.807 16:06:39 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:08.807 16:06:39 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:08.807 16:06:39 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:08.807 16:06:39 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:08.807 16:06:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:08.807 16:06:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:08.807 16:06:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:08.807 16:06:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:08.807 16:06:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:08.807 16:06:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:08.807 16:06:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.807 16:06:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:08.807 16:06:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:08.807 16:06:39 -- nvmf/common.sh@104 -- # continue 2 00:16:08.807 16:06:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:08.807 16:06:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.807 16:06:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:08.807 16:06:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:08.807 16:06:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:08.807 16:06:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:08.807 16:06:39 -- 
nvmf/common.sh@104 -- # continue 2 00:16:08.807 16:06:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:08.807 16:06:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:08.807 16:06:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:08.807 16:06:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:08.807 16:06:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:08.807 16:06:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:08.807 16:06:39 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:08.807 16:06:39 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:08.807 16:06:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:08.807 16:06:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:08.807 16:06:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:08.807 16:06:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:08.807 16:06:39 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:08.807 192.168.100.9' 00:16:08.807 16:06:39 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:08.807 192.168.100.9' 00:16:08.807 16:06:39 -- nvmf/common.sh@445 -- # head -n 1 00:16:08.807 16:06:39 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:08.807 16:06:39 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:08.807 192.168.100.9' 00:16:08.807 16:06:39 -- nvmf/common.sh@446 -- # head -n 1 00:16:08.807 16:06:39 -- nvmf/common.sh@446 -- # tail -n +2 00:16:08.807 16:06:39 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:08.807 16:06:39 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:08.807 16:06:39 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:16:08.807 16:06:39 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:08.807 16:06:39 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:08.807 16:06:39 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:08.807 16:06:39 -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:16:08.807 16:06:39 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:08.807 16:06:39 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:08.807 16:06:39 -- common/autotest_common.sh@10 -- # set +x 00:16:08.807 16:06:39 -- nvmf/common.sh@469 -- # nvmfpid=1310364 00:16:08.808 16:06:39 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:08.808 16:06:39 -- nvmf/common.sh@470 -- # waitforlisten 1310364 00:16:08.808 16:06:39 -- common/autotest_common.sh@829 -- # '[' -z 1310364 ']' 00:16:08.808 16:06:39 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.808 16:06:39 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:08.808 16:06:39 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.808 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.808 16:06:39 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:08.808 16:06:39 -- common/autotest_common.sh@10 -- # set +x 00:16:08.808 [2024-11-20 16:06:39.227853] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:16:08.808 [2024-11-20 16:06:39.227909] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.808 EAL: No free 2048 kB hugepages reported on node 1 00:16:08.808 [2024-11-20 16:06:39.298857] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:08.808 [2024-11-20 16:06:39.336884] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:08.808 [2024-11-20 16:06:39.337002] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.808 [2024-11-20 16:06:39.337013] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.808 [2024-11-20 16:06:39.337024] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.808 [2024-11-20 16:06:39.337136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.808 [2024-11-20 16:06:39.337199] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:08.808 [2024-11-20 16:06:39.337201] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.377 16:06:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.377 16:06:40 -- common/autotest_common.sh@862 -- # return 0 00:16:09.377 16:06:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:09.377 16:06:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:09.377 16:06:40 -- common/autotest_common.sh@10 -- # set +x 00:16:09.377 16:06:40 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.377 16:06:40 -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:16:09.377 16:06:40 -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:09.637 [2024-11-20 16:06:40.285808] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1645900/0x1649db0) succeed. 00:16:09.637 [2024-11-20 16:06:40.294921] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1646e00/0x168b450) succeed. 
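The ns_hotplug_stress target comes up the same way (pid 1310364) and is then configured through rpc.py; the first call above creates the RDMA transport, which is again what emits the create_ib_device notices for mlx5_0 and mlx5_1. A short sketch of that step, with $rpc_py pointing at the in-tree script exactly as the trace sets it (the option values are the test's defaults, passed through unchanged):

rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py

# Create the RDMA transport with 1024 shared buffers (plus the test's -u 8192
# default); both mlx5 ports are claimed automatically once the transport exists.
$rpc_py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192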
00:16:09.637 16:06:40 -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:09.896 16:06:40 -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:10.156 [2024-11-20 16:06:40.746599] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:10.156 16:06:40 -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:16:10.415 16:06:40 -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:16:10.415 Malloc0 00:16:10.415 16:06:41 -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:10.675 Delay0 00:16:10.675 16:06:41 -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:10.935 16:06:41 -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:16:10.935 NULL1 00:16:10.935 16:06:41 -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:16:11.194 16:06:41 -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1310925 00:16:11.194 16:06:41 -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:16:11.194 16:06:41 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:11.194 16:06:41 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:11.194 EAL: No free 2048 kB hugepages reported on node 1 00:16:12.574 Read completed with error (sct=0, sc=11) 00:16:12.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.574 16:06:43 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:12.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.574 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:12.574 16:06:43 -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:16:12.574 16:06:43 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:16:12.833 true 00:16:12.833 16:06:43 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:12.833 16:06:43 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:13.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.770 16:06:44 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:13.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.770 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:13.770 16:06:44 -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:16:13.770 16:06:44 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:16:14.029 true 00:16:14.029 16:06:44 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:14.029 16:06:44 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:14.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.967 16:06:45 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:14.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.967 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:14.967 16:06:45 -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:16:14.967 16:06:45 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:16:15.226 true 00:16:15.226 16:06:45 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:15.226 16:06:45 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:16.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.164 16:06:46 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:16.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.164 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:16:16.164 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:16.164 16:06:46 -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:16:16.164 16:06:46 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:16:16.423 true 00:16:16.423 16:06:47 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:16.423 16:06:47 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:17.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.359 16:06:47 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:17.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.359 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:17.359 16:06:48 -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:16:17.359 16:06:48 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:16:17.618 true 00:16:17.618 16:06:48 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:17.618 16:06:48 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:18.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.556 16:06:49 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:18.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.556 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:18.556 16:06:49 -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:16:18.556 16:06:49 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:16:18.818 true 00:16:18.818 16:06:49 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:18.818 16:06:49 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:19.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.755 16:06:50 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:19.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.755 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:16:19.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.755 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:19.755 16:06:50 -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:16:19.755 16:06:50 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:16:20.014 true 00:16:20.014 16:06:50 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:20.014 16:06:50 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:20.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.951 16:06:51 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:20.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:20.951 16:06:51 -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:16:20.951 16:06:51 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:16:21.210 true 00:16:21.210 16:06:51 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:21.210 16:06:51 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:22.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.146 16:06:52 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:22.146 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.147 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:22.147 16:06:52 -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:16:22.147 16:06:52 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:16:22.405 true 00:16:22.405 16:06:53 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:22.405 16:06:53 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:23.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.341 16:06:53 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
00:16:23.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.341 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:23.341 16:06:54 -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:16:23.341 16:06:54 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:16:23.600 true 00:16:23.600 16:06:54 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:23.600 16:06:54 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:24.536 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.537 16:06:55 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:24.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.537 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:24.537 16:06:55 -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:16:24.537 16:06:55 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:16:24.796 true 00:16:24.796 16:06:55 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:24.796 16:06:55 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:25.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.736 16:06:56 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:25.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.736 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:25.736 16:06:56 -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:16:25.736 16:06:56 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:16:25.996 true 00:16:25.996 16:06:56 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:25.996 16:06:56 -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:26.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.936 16:06:57 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:26.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.936 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:26.936 16:06:57 -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:16:26.936 16:06:57 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:16:27.195 true 00:16:27.196 16:06:57 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:27.196 16:06:57 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:28.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.134 16:06:58 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:28.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.134 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.135 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:28.135 16:06:58 -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:16:28.135 16:06:58 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:16:28.394 true 00:16:28.394 16:06:59 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:28.394 16:06:59 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:29.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.332 16:06:59 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:29.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.332 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:29.332 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:16:29.332 16:07:00 -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:16:29.332 16:07:00 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:16:29.590 true 00:16:29.590 16:07:00 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:29.590 16:07:00 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:30.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.530 16:07:01 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:30.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:30.530 16:07:01 -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:16:30.530 16:07:01 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:16:30.789 true 00:16:30.789 16:07:01 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:30.789 16:07:01 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:31.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.728 16:07:02 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:31.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.728 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.987 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:31.987 16:07:02 -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:16:31.987 16:07:02 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:16:31.987 true 00:16:31.987 16:07:02 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:31.987 16:07:02 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:32.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:32.925 16:07:03 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:32.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:32.925 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:16:32.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:32.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:32.925 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.185 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:33.185 16:07:03 -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:16:33.185 16:07:03 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:16:33.185 true 00:16:33.185 16:07:03 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:33.185 16:07:03 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:34.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.217 16:07:04 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:34.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.217 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:34.217 16:07:04 -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:16:34.217 16:07:04 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:16:34.476 true 00:16:34.476 16:07:05 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:34.476 16:07:05 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:35.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.413 16:07:05 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:35.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:35.413 16:07:06 -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:16:35.413 16:07:06 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:16:35.672 true 00:16:35.672 16:07:06 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:35.672 16:07:06 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:36.627 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:36.627 16:07:07 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:36.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:36.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:36.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:36.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:36.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:36.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:36.627 16:07:07 -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:16:36.627 16:07:07 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:16:36.886 true 00:16:36.886 16:07:07 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:36.886 16:07:07 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:37.825 16:07:08 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:37.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:37.825 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:38.084 16:07:08 -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:16:38.084 16:07:08 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:16:38.084 true 00:16:38.084 16:07:08 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:38.084 16:07:08 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:39.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.022 16:07:09 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:39.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.022 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:39.281 16:07:09 -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:16:39.281 16:07:09 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1023 00:16:39.281 true 00:16:39.281 16:07:10 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:39.281 16:07:10 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:40.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:40.218 16:07:10 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:40.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:40.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:40.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:40.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:40.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:40.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:40.218 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:40.476 16:07:11 -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:16:40.476 16:07:11 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:16:40.476 true 00:16:40.476 16:07:11 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:40.476 16:07:11 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.412 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:41.413 16:07:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:41.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:41.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:16:41.672 16:07:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:16:41.672 16:07:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:16:41.672 true 00:16:41.672 16:07:12 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:41.672 16:07:12 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:41.930 16:07:12 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:42.190 16:07:12 -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:16:42.190 16:07:12 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:16:42.190 true 00:16:42.448 16:07:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:42.448 16:07:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:42.448 16:07:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:42.707 16:07:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:16:42.707 16:07:13 -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:16:42.966 true 00:16:42.966 16:07:13 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:42.966 16:07:13 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:42.966 16:07:13 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:43.226 16:07:13 -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:16:43.226 16:07:13 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:16:43.485 true 00:16:43.485 16:07:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:43.485 16:07:14 -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:43.485 Initializing NVMe Controllers 00:16:43.485 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:16:43.485 Controller IO queue size 128, less than required. 00:16:43.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:43.485 Controller IO queue size 128, less than required. 00:16:43.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:43.485 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:16:43.485 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:16:43.485 Initialization complete. Launching workers. 
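Two things in the block above are expected noise rather than failures. The suppressed "Read completed with error (sct=0, sc=11)" messages are reads completing with Generic Command Status (sct=0), status code 0x0b, Invalid Namespace or Format, which happens whenever namespace 1 is momentarily detached by the hot-plug loop; the initiator prints the message once and suppresses the next 999 repeats. The "Controller IO queue size 128, less than required" notice means the initiator asked for a deeper queue than the target grants, so surplus requests wait inside the NVMe driver. For orientation, a hypothetical shape of the background I/O load whose PID (1310925) the loop polls; the binary path and every flag below are assumptions for illustration only, while the RDMA address 192.168.100.8:4420 and the queue size of 128 do come from the log:

    # Hypothetical initiator invocation (assumed, not read from this log).
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/perf \
        -q 128 -o 4096 -w randread -t 100 \
        -r 'trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' &
    perf_pid=$!    # the trace above polls this PID with kill -0 1310925

In the latency summary that follows, NSID 1 (the namespace being detached and re-attached) averages roughly 18.2 ms per I/O against about 3.6 ms for the undisturbed NSID 2, which is the intended stress signal of the test.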
00:16:43.485 ======================================================== 00:16:43.485 Latency(us) 00:16:43.485 Device Information : IOPS MiB/s Average min max 00:16:43.485 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6154.60 3.01 18233.44 875.28 1132759.83 00:16:43.485 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 35728.50 17.45 3582.51 1620.48 281425.95 00:16:43.485 ======================================================== 00:16:43.485 Total : 41883.10 20.45 5735.42 875.28 1132759.83 00:16:43.485 00:16:43.743 16:07:14 -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:16:43.743 16:07:14 -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:16:43.743 16:07:14 -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:16:44.002 true 00:16:44.002 16:07:14 -- target/ns_hotplug_stress.sh@44 -- # kill -0 1310925 00:16:44.002 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1310925) - No such process 00:16:44.002 16:07:14 -- target/ns_hotplug_stress.sh@53 -- # wait 1310925 00:16:44.002 16:07:14 -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:44.262 16:07:14 -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:44.262 16:07:15 -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:16:44.262 16:07:15 -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:16:44.262 16:07:15 -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:16:44.262 16:07:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:44.262 16:07:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:16:44.521 null0 00:16:44.521 16:07:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:44.521 16:07:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:44.521 16:07:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:16:44.780 null1 00:16:44.780 16:07:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:44.780 16:07:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:44.780 16:07:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:16:45.040 null2 00:16:45.040 16:07:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:45.040 16:07:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:45.040 16:07:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:16:45.040 null3 00:16:45.040 16:07:15 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:45.040 16:07:15 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:45.040 16:07:15 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:16:45.299 null4 00:16:45.299 16:07:16 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:45.299 16:07:16 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
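At this point the first phase ends: kill -0 reports that PID 1310925 no longer exists, the script reaps it with wait, removes namespaces 1 and 2, and prepares the parallel phase by creating one null bdev per worker (nthreads=8, each 100 MB with a 4096-byte block size). A sketch of that setup as reconstructed from the @53-@60 records around here; the loop form is an assumption, and rpc.py again stands for the full scripts/rpc.py path:

    # Reconstructed phase-2 setup; not copied from the script.
    wait "$perf_pid"                                                 # @53: reap the finished I/O load
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @54
    rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2     # @55
    nthreads=8                                                       # @58: one worker per namespace
    pids=()
    for ((i = 0; i < nthreads; i++)); do
        rpc.py bdev_null_create "null$i" 100 4096                    # @60: null0 ... null7
    done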
00:16:45.299 16:07:16 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:16:45.559 null5 00:16:45.559 16:07:16 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:45.559 16:07:16 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:45.559 16:07:16 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:16:45.559 null6 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:16:45.819 null7 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
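The interleaved @14-@18 and @62-@66 records above come from eight add_remove workers running concurrently, one per namespace: each worker attaches its null bdev under a fixed namespace ID, detaches it again, and repeats ten times, while the parent collects the worker PIDs and waits on all of them (the "wait 1316941 1316942 ..." record below). A sketch of that structure, reconstructed from the trace; the nsid/bdev pairs and the 10-iteration bound are taken from the records, the exact function body and loop form are assumptions:

    # Reconstructed parallel add/remove worker; not copied from the script.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do                                                  # @16
            rpc.py nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"      # @17
            rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"              # @18
        done
    }
    for ((i = 0; i < nthreads; i++)); do     # @62-@64: add_remove 1 null0 ... add_remove 8 null7, in the background
        add_remove $((i + 1)) "null$i" &
        pids+=($!)
    done
    wait "${pids[@]}"                        # @66: wait on all eight workers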
00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@66 -- # wait 1316941 1316942 1316943 1316945 1316947 1316949 1316951 1316953 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:45.819 16:07:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:46.079 16:07:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:46.079 16:07:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:46.079 16:07:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:46.079 16:07:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:46.079 16:07:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:46.079 16:07:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:46.079 16:07:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:46.079 16:07:16 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:46.338 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.338 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.338 16:07:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:46.338 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.338 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.338 16:07:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:46.338 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.338 16:07:16 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.338 16:07:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:46.338 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.339 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.339 16:07:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:46.339 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.339 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.339 16:07:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:46.339 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.339 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.339 16:07:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:46.339 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.339 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.339 16:07:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:46.339 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.339 16:07:16 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.339 16:07:16 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:46.339 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:46.598 16:07:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:46.858 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:46.858 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:46.858 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:46.858 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:46.858 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:46.858 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:46.858 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:16:46.858 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:47.118 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:47.378 16:07:17 -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:47.378 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:47.378 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:47.378 16:07:17 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.378 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:47.637 16:07:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:47.637 16:07:18 -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:47.637 16:07:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:47.637 16:07:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:47.637 16:07:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:47.637 16:07:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:47.637 16:07:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:47.637 16:07:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:47.897 16:07:18 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:47.897 16:07:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:48.155 16:07:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:48.155 16:07:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:48.155 16:07:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:48.155 16:07:18 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:48.155 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.155 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.156 16:07:18 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:48.413 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:48.414 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:48.414 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:48.414 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:48.414 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:48.414 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:48.414 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:48.414 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.672 16:07:19 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.672 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.673 16:07:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:48.673 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:48.673 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:48.673 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:48.673 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:48.931 16:07:19 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:49.191 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:49.191 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:49.191 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:49.191 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:49.191 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:49.191 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:49.191 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:49.191 16:07:19 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:49.453 16:07:20 -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:49.453 16:07:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:49.454 16:07:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:16:49.454 16:07:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:16:49.454 16:07:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:49.454 16:07:20 -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:49.713 16:07:20 -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:16:49.713 16:07:20 -- nvmf/common.sh@476 -- # nvmfcleanup 00:16:49.713 16:07:20 -- nvmf/common.sh@116 -- # sync 00:16:49.713 16:07:20 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:16:49.713 16:07:20 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:16:49.713 16:07:20 -- nvmf/common.sh@119 -- # set +e 00:16:49.713 16:07:20 -- nvmf/common.sh@120 -- # for i in {1..20} 00:16:49.713 16:07:20 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:16:49.713 rmmod nvme_rdma 00:16:49.713 rmmod nvme_fabrics 00:16:49.713 16:07:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:16:49.713 16:07:20 -- nvmf/common.sh@123 -- # set -e 00:16:49.713 16:07:20 -- nvmf/common.sh@124 -- # return 0 00:16:49.713 16:07:20 -- nvmf/common.sh@477 -- # '[' -n 1310364 ']' 00:16:49.713 16:07:20 -- nvmf/common.sh@478 -- # killprocess 1310364 00:16:49.713 16:07:20 -- common/autotest_common.sh@936 -- # '[' -z 1310364 ']' 00:16:49.713 16:07:20 -- common/autotest_common.sh@940 -- # kill -0 1310364 00:16:49.713 16:07:20 -- common/autotest_common.sh@941 -- # uname 00:16:49.972 16:07:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:16:49.972 16:07:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1310364 00:16:49.972 16:07:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:16:49.972 16:07:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:16:49.972 16:07:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1310364' 00:16:49.972 killing process with pid 1310364 00:16:49.972 16:07:20 -- common/autotest_common.sh@955 -- # kill 1310364 00:16:49.972 16:07:20 -- common/autotest_common.sh@960 -- # wait 1310364 00:16:50.232 16:07:20 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:16:50.232 16:07:20 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:16:50.232 00:16:50.232 real 0m48.095s 00:16:50.232 user 3m18.699s 00:16:50.232 sys 0m13.462s 00:16:50.232 16:07:20 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:16:50.232 16:07:20 -- common/autotest_common.sh@10 -- # set +x 00:16:50.232 ************************************ 00:16:50.232 END TEST nvmf_ns_hotplug_stress 00:16:50.232 
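A minimal sketch of the namespace hot-plug loop traced above (target/ns_hotplug_stress.sh lines 16-18): ten iterations that hot-add namespaces 1-8 (backed by null bdevs null0-null7) to nqn.2016-06.io.spdk:cnode1 and then hot-remove them. The loop body below is reconstructed from the traced rpc.py calls, not copied from the script; the background jobs and the shorthand variable names are assumptions, introduced to explain why the add/remove completion order differs between iterations in the trace.

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py   # assumed shorthand
    NQN=nqn.2016-06.io.spdk:cnode1
    for (( i = 0; i < 10; ++i )); do                 # line 16: (( ++i )) / (( i < 10 ))
        for n in $(seq 1 8); do
            # line 17: nsid n is backed by null bdev null$((n-1)); launched concurrently,
            # which is why the completion order varies from iteration to iteration
            $rpc nvmf_subsystem_add_ns -n $n $NQN null$((n - 1)) &
        done
        wait
        for n in $(seq 1 8); do
            $rpc nvmf_subsystem_remove_ns $NQN $n &  # line 18: hot-remove the same nsids
        done
        wait
    done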
************************************ 00:16:50.232 16:07:20 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:50.232 16:07:20 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:16:50.232 16:07:20 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:50.232 16:07:20 -- common/autotest_common.sh@10 -- # set +x 00:16:50.232 ************************************ 00:16:50.232 START TEST nvmf_connect_stress 00:16:50.232 ************************************ 00:16:50.232 16:07:20 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=rdma 00:16:50.232 * Looking for test storage... 00:16:50.232 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:16:50.232 16:07:20 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:16:50.232 16:07:20 -- common/autotest_common.sh@1690 -- # lcov --version 00:16:50.232 16:07:20 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:16:50.232 16:07:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:16:50.232 16:07:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:16:50.232 16:07:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:16:50.232 16:07:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:16:50.232 16:07:21 -- scripts/common.sh@335 -- # IFS=.-: 00:16:50.232 16:07:21 -- scripts/common.sh@335 -- # read -ra ver1 00:16:50.232 16:07:21 -- scripts/common.sh@336 -- # IFS=.-: 00:16:50.232 16:07:21 -- scripts/common.sh@336 -- # read -ra ver2 00:16:50.232 16:07:21 -- scripts/common.sh@337 -- # local 'op=<' 00:16:50.232 16:07:21 -- scripts/common.sh@339 -- # ver1_l=2 00:16:50.232 16:07:21 -- scripts/common.sh@340 -- # ver2_l=1 00:16:50.232 16:07:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:16:50.232 16:07:21 -- scripts/common.sh@343 -- # case "$op" in 00:16:50.232 16:07:21 -- scripts/common.sh@344 -- # : 1 00:16:50.232 16:07:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:16:50.232 16:07:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:50.232 16:07:21 -- scripts/common.sh@364 -- # decimal 1 00:16:50.232 16:07:21 -- scripts/common.sh@352 -- # local d=1 00:16:50.492 16:07:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:50.492 16:07:21 -- scripts/common.sh@354 -- # echo 1 00:16:50.492 16:07:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:16:50.492 16:07:21 -- scripts/common.sh@365 -- # decimal 2 00:16:50.492 16:07:21 -- scripts/common.sh@352 -- # local d=2 00:16:50.492 16:07:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:50.492 16:07:21 -- scripts/common.sh@354 -- # echo 2 00:16:50.492 16:07:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:16:50.492 16:07:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:16:50.492 16:07:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:16:50.492 16:07:21 -- scripts/common.sh@367 -- # return 0 00:16:50.492 16:07:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:50.492 16:07:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:16:50.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.492 --rc genhtml_branch_coverage=1 00:16:50.492 --rc genhtml_function_coverage=1 00:16:50.492 --rc genhtml_legend=1 00:16:50.492 --rc geninfo_all_blocks=1 00:16:50.492 --rc geninfo_unexecuted_blocks=1 00:16:50.492 00:16:50.492 ' 00:16:50.492 16:07:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:16:50.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.492 --rc genhtml_branch_coverage=1 00:16:50.492 --rc genhtml_function_coverage=1 00:16:50.492 --rc genhtml_legend=1 00:16:50.492 --rc geninfo_all_blocks=1 00:16:50.492 --rc geninfo_unexecuted_blocks=1 00:16:50.492 00:16:50.492 ' 00:16:50.492 16:07:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:16:50.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.492 --rc genhtml_branch_coverage=1 00:16:50.492 --rc genhtml_function_coverage=1 00:16:50.492 --rc genhtml_legend=1 00:16:50.492 --rc geninfo_all_blocks=1 00:16:50.492 --rc geninfo_unexecuted_blocks=1 00:16:50.492 00:16:50.492 ' 00:16:50.492 16:07:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:16:50.492 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:50.492 --rc genhtml_branch_coverage=1 00:16:50.492 --rc genhtml_function_coverage=1 00:16:50.492 --rc genhtml_legend=1 00:16:50.492 --rc geninfo_all_blocks=1 00:16:50.492 --rc geninfo_unexecuted_blocks=1 00:16:50.492 00:16:50.492 ' 00:16:50.492 16:07:21 -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:16:50.492 16:07:21 -- nvmf/common.sh@7 -- # uname -s 00:16:50.492 16:07:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:50.492 16:07:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:50.492 16:07:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:50.492 16:07:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:50.492 16:07:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:50.492 16:07:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:50.492 16:07:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:50.492 16:07:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:50.492 16:07:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:50.492 16:07:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:50.492 16:07:21 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:16:50.492 16:07:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:16:50.492 16:07:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:50.492 16:07:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:50.492 16:07:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:50.492 16:07:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:16:50.492 16:07:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:50.492 16:07:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:50.492 16:07:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:50.492 16:07:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.492 16:07:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.492 16:07:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.492 16:07:21 -- paths/export.sh@5 -- # export PATH 00:16:50.492 16:07:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:50.492 16:07:21 -- nvmf/common.sh@46 -- # : 0 00:16:50.492 16:07:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:16:50.492 16:07:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:16:50.492 16:07:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:16:50.492 16:07:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:50.492 16:07:21 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:50.492 16:07:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:16:50.492 16:07:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:16:50.492 16:07:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:16:50.492 16:07:21 -- target/connect_stress.sh@12 -- # nvmftestinit 00:16:50.492 16:07:21 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:16:50.492 16:07:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:50.492 16:07:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:16:50.492 16:07:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:16:50.492 16:07:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:16:50.492 16:07:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:50.493 16:07:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:50.493 16:07:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:50.493 16:07:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:16:50.493 16:07:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:16:50.493 16:07:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:16:50.493 16:07:21 -- common/autotest_common.sh@10 -- # set +x 00:16:57.070 16:07:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:16:57.070 16:07:27 -- nvmf/common.sh@290 -- # pci_devs=() 00:16:57.070 16:07:27 -- nvmf/common.sh@290 -- # local -a pci_devs 00:16:57.070 16:07:27 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:16:57.070 16:07:27 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:16:57.070 16:07:27 -- nvmf/common.sh@292 -- # pci_drivers=() 00:16:57.070 16:07:27 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:16:57.070 16:07:27 -- nvmf/common.sh@294 -- # net_devs=() 00:16:57.070 16:07:27 -- nvmf/common.sh@294 -- # local -ga net_devs 00:16:57.070 16:07:27 -- nvmf/common.sh@295 -- # e810=() 00:16:57.070 16:07:27 -- nvmf/common.sh@295 -- # local -ga e810 00:16:57.070 16:07:27 -- nvmf/common.sh@296 -- # x722=() 00:16:57.070 16:07:27 -- nvmf/common.sh@296 -- # local -ga x722 00:16:57.070 16:07:27 -- nvmf/common.sh@297 -- # mlx=() 00:16:57.070 16:07:27 -- nvmf/common.sh@297 -- # local -ga mlx 00:16:57.070 16:07:27 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:57.070 16:07:27 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:57.070 16:07:27 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:57.070 16:07:27 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:57.070 16:07:27 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:57.070 16:07:27 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:57.070 16:07:27 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:57.070 16:07:27 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:57.070 16:07:27 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:57.070 16:07:27 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:57.070 16:07:27 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:57.070 16:07:27 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:16:57.070 16:07:27 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:16:57.070 16:07:27 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:16:57.070 16:07:27 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:16:57.071 16:07:27 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
00:16:57.071 16:07:27 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:16:57.071 16:07:27 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:16:57.071 16:07:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:57.071 16:07:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:16:57.071 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:16:57.071 16:07:27 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:57.071 16:07:27 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:16:57.071 16:07:27 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:16:57.071 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:16:57.071 16:07:27 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:16:57.071 16:07:27 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:16:57.071 16:07:27 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:57.071 16:07:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.071 16:07:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:57.071 16:07:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.071 16:07:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:16:57.071 Found net devices under 0000:d9:00.0: mlx_0_0 00:16:57.071 16:07:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.071 16:07:27 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:16:57.071 16:07:27 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:57.071 16:07:27 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:16:57.071 16:07:27 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:57.071 16:07:27 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:16:57.071 Found net devices under 0000:d9:00.1: mlx_0_1 00:16:57.071 16:07:27 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:16:57.071 16:07:27 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:16:57.071 16:07:27 -- nvmf/common.sh@402 -- # is_hw=yes 00:16:57.071 16:07:27 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@408 -- # rdma_device_init 00:16:57.071 16:07:27 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:16:57.071 16:07:27 -- nvmf/common.sh@57 -- # uname 00:16:57.071 16:07:27 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:16:57.071 16:07:27 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:16:57.071 16:07:27 -- nvmf/common.sh@62 -- # modprobe ib_core 00:16:57.071 16:07:27 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:16:57.071 
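For context, a condensed sketch of the NIC discovery traced above in nvmf/common.sh: on an RDMA run only the Mellanox PCI IDs are kept, the two functions at 0000:d9:00.0/.1 (0x15b3 - 0x1015) are matched, and each function's kernel netdev (mlx_0_0, mlx_0_1) is read from sysfs before the IB/RDMA kernel modules are loaded (the modprobe sequence continues just below). The loop is a simplification of the helper, not its literal text.

    for pci in "${mlx[@]}"; do                         # pci_devs was narrowed to the mlx list
        pci_net_devs=(/sys/bus/pci/devices/$pci/net/*) # e.g. .../0000:d9:00.0/net/mlx_0_0
        net_devs+=("${pci_net_devs[@]##*/}")           # -> mlx_0_0, mlx_0_1
    done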
16:07:27 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:16:57.071 16:07:27 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:16:57.071 16:07:27 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:16:57.071 16:07:27 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:16:57.071 16:07:27 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:16:57.071 16:07:27 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:16:57.071 16:07:27 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:16:57.071 16:07:27 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:57.071 16:07:27 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:57.071 16:07:27 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:57.071 16:07:27 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:57.071 16:07:27 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:57.071 16:07:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:57.071 16:07:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.071 16:07:27 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:57.071 16:07:27 -- nvmf/common.sh@104 -- # continue 2 00:16:57.071 16:07:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:57.071 16:07:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.071 16:07:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.071 16:07:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:57.071 16:07:27 -- nvmf/common.sh@104 -- # continue 2 00:16:57.071 16:07:27 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:57.071 16:07:27 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:16:57.071 16:07:27 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:57.071 16:07:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:57.071 16:07:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:57.071 16:07:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:57.071 16:07:27 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:16:57.071 16:07:27 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:16:57.071 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:57.071 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:16:57.071 altname enp217s0f0np0 00:16:57.071 altname ens818f0np0 00:16:57.071 inet 192.168.100.8/24 scope global mlx_0_0 00:16:57.071 valid_lft forever preferred_lft forever 00:16:57.071 16:07:27 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:16:57.071 16:07:27 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:16:57.071 16:07:27 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:57.071 16:07:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:57.071 16:07:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:57.071 16:07:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:57.071 16:07:27 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:16:57.071 16:07:27 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:16:57.071 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:16:57.071 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:16:57.071 altname enp217s0f1np1 
00:16:57.071 altname ens818f1np1 00:16:57.071 inet 192.168.100.9/24 scope global mlx_0_1 00:16:57.071 valid_lft forever preferred_lft forever 00:16:57.071 16:07:27 -- nvmf/common.sh@410 -- # return 0 00:16:57.071 16:07:27 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:16:57.071 16:07:27 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:16:57.071 16:07:27 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:16:57.071 16:07:27 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:16:57.071 16:07:27 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:16:57.071 16:07:27 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:16:57.071 16:07:27 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:16:57.071 16:07:27 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:16:57.071 16:07:27 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:16:57.071 16:07:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:57.071 16:07:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.071 16:07:27 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:16:57.071 16:07:27 -- nvmf/common.sh@104 -- # continue 2 00:16:57.071 16:07:27 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:16:57.071 16:07:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.071 16:07:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:16:57.071 16:07:27 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:16:57.071 16:07:27 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:16:57.071 16:07:27 -- nvmf/common.sh@104 -- # continue 2 00:16:57.071 16:07:27 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:57.071 16:07:27 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:16:57.071 16:07:27 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:16:57.071 16:07:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:16:57.071 16:07:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:57.071 16:07:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:57.071 16:07:27 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:16:57.071 16:07:27 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:16:57.071 16:07:27 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:16:57.071 16:07:27 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:16:57.071 16:07:27 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:16:57.071 16:07:27 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:16:57.071 16:07:27 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:16:57.071 192.168.100.9' 00:16:57.071 16:07:27 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:16:57.071 192.168.100.9' 00:16:57.071 16:07:27 -- nvmf/common.sh@445 -- # head -n 1 00:16:57.071 16:07:27 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:16:57.071 16:07:27 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:16:57.071 192.168.100.9' 00:16:57.071 16:07:27 -- nvmf/common.sh@446 -- # tail -n +2 00:16:57.071 16:07:27 -- nvmf/common.sh@446 -- # head -n 1 00:16:57.071 16:07:27 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:16:57.071 16:07:27 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:16:57.071 16:07:27 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:16:57.071 16:07:27 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:16:57.071 16:07:27 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:16:57.071 16:07:27 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:16:57.332 16:07:27 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:16:57.332 16:07:27 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:16:57.332 16:07:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:57.332 16:07:27 -- common/autotest_common.sh@10 -- # set +x 00:16:57.332 16:07:27 -- nvmf/common.sh@469 -- # nvmfpid=1321252 00:16:57.332 16:07:27 -- nvmf/common.sh@470 -- # waitforlisten 1321252 00:16:57.332 16:07:27 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:16:57.332 16:07:27 -- common/autotest_common.sh@829 -- # '[' -z 1321252 ']' 00:16:57.332 16:07:27 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:57.332 16:07:27 -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:57.332 16:07:27 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:57.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:57.332 16:07:27 -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:57.332 16:07:27 -- common/autotest_common.sh@10 -- # set +x 00:16:57.332 [2024-11-20 16:07:27.933762] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:16:57.332 [2024-11-20 16:07:27.933820] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:57.332 EAL: No free 2048 kB hugepages reported on node 1 00:16:57.332 [2024-11-20 16:07:28.005515] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:57.332 [2024-11-20 16:07:28.044146] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:16:57.332 [2024-11-20 16:07:28.044261] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:57.332 [2024-11-20 16:07:28.044272] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:57.332 [2024-11-20 16:07:28.044281] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
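The address discovery and target start traced above condense to roughly the following. Every individual command appears in the log; only the surrounding layout (how the helpers glue them together, and the use of $!) is assumed.

    # allocate_nic_ips / get_available_rdma_ips: first IPv4 address of each RDMA netdev
    RDMA_IP_LIST=$(for nic in $(get_rdma_if_list); do
                       ip -o -4 addr show "$nic" | awk '{print $4}' | cut -d/ -f1
                   done)                                                    # 192.168.100.8 / .9
    NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)                # 192.168.100.8
    NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)  # 192.168.100.9
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma

    # nvmfappstart -m 0xE: run nvmf_tgt on cores 1-3 and wait for its RPC socket
    /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    waitforlisten $nvmfpid    # waits for the UNIX socket /var/tmp/spdk.sock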
00:16:57.332 [2024-11-20 16:07:28.044393] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:16:57.332 [2024-11-20 16:07:28.044474] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:16:57.332 [2024-11-20 16:07:28.044476] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:16:58.271 16:07:28 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:58.271 16:07:28 -- common/autotest_common.sh@862 -- # return 0 00:16:58.271 16:07:28 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:16:58.271 16:07:28 -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:58.271 16:07:28 -- common/autotest_common.sh@10 -- # set +x 00:16:58.271 16:07:28 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:58.271 16:07:28 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:16:58.271 16:07:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.271 16:07:28 -- common/autotest_common.sh@10 -- # set +x 00:16:58.271 [2024-11-20 16:07:28.824302] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x21ae900/0x21b2db0) succeed. 00:16:58.271 [2024-11-20 16:07:28.833202] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x21afe00/0x21f4450) succeed. 00:16:58.271 16:07:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.271 16:07:28 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:58.271 16:07:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.271 16:07:28 -- common/autotest_common.sh@10 -- # set +x 00:16:58.271 16:07:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.271 16:07:28 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:16:58.271 16:07:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.271 16:07:28 -- common/autotest_common.sh@10 -- # set +x 00:16:58.271 [2024-11-20 16:07:28.951740] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:16:58.271 16:07:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.271 16:07:28 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:16:58.271 16:07:28 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.271 16:07:28 -- common/autotest_common.sh@10 -- # set +x 00:16:58.271 NULL1 00:16:58.271 16:07:28 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.271 16:07:28 -- target/connect_stress.sh@21 -- # PERF_PID=1321380 00:16:58.271 16:07:28 -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:16:58.271 16:07:28 -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:58.271 16:07:28 -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:16:58.271 16:07:28 -- target/connect_stress.sh@27 -- # seq 1 20 00:16:58.271 16:07:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:28 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:28 -- 
target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:28 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:28 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 EAL: No free 2048 kB hugepages reported on node 1 00:16:58.271 16:07:28 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:28 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:29 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:29 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:29 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:29 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:29 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:29 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:29 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:29 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:29 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:29 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:29 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:29 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:29 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:29 -- target/connect_stress.sh@28 -- # cat 00:16:58.271 16:07:29 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:16:58.271 16:07:29 -- target/connect_stress.sh@28 -- # cat 00:16:58.542 16:07:29 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:16:58.542 16:07:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.542 16:07:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.542 16:07:29 -- common/autotest_common.sh@10 -- # set +x 00:16:58.805 16:07:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.805 16:07:29 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:16:58.805 16:07:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:58.805 16:07:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.805 16:07:29 -- common/autotest_common.sh@10 -- # set +x 00:16:59.065 16:07:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.065 16:07:29 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:16:59.065 16:07:29 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.065 16:07:29 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:59.065 16:07:29 -- common/autotest_common.sh@10 -- # set +x 00:16:59.324 16:07:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.324 16:07:30 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:16:59.324 16:07:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.324 16:07:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.324 16:07:30 -- common/autotest_common.sh@10 -- # set +x 00:16:59.583 16:07:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.583 16:07:30 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:16:59.583 16:07:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:16:59.583 16:07:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.583 16:07:30 -- common/autotest_common.sh@10 -- # set +x 00:17:00.151 16:07:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.151 16:07:30 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:00.151 16:07:30 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.151 16:07:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.151 16:07:30 -- common/autotest_common.sh@10 -- # set +x 00:17:00.411 16:07:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.411 16:07:31 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:00.411 16:07:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.411 16:07:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.411 16:07:31 -- common/autotest_common.sh@10 -- # set +x 00:17:00.670 16:07:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.671 16:07:31 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:00.671 16:07:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.671 16:07:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.671 16:07:31 -- common/autotest_common.sh@10 -- # set +x 00:17:00.929 16:07:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.929 16:07:31 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:00.929 16:07:31 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:00.929 16:07:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.929 16:07:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.497 16:07:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.497 16:07:32 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:01.497 16:07:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.497 16:07:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.497 16:07:32 -- common/autotest_common.sh@10 -- # set +x 00:17:01.757 16:07:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.757 16:07:32 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:01.757 16:07:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:01.757 16:07:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.757 16:07:32 -- common/autotest_common.sh@10 -- # set +x 00:17:02.016 16:07:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.016 16:07:32 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:02.016 16:07:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.016 16:07:32 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.016 16:07:32 -- common/autotest_common.sh@10 -- # set +x 00:17:02.275 16:07:32 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.275 16:07:32 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:02.275 16:07:32 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.275 16:07:32 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:02.275 16:07:32 -- common/autotest_common.sh@10 -- # set +x 00:17:02.535 16:07:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.535 16:07:33 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:02.535 16:07:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:02.535 16:07:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.535 16:07:33 -- common/autotest_common.sh@10 -- # set +x 00:17:03.105 16:07:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.105 16:07:33 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:03.105 16:07:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.105 16:07:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.105 16:07:33 -- common/autotest_common.sh@10 -- # set +x 00:17:03.363 16:07:33 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.363 16:07:33 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:03.363 16:07:33 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.363 16:07:33 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.363 16:07:33 -- common/autotest_common.sh@10 -- # set +x 00:17:03.623 16:07:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.623 16:07:34 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:03.623 16:07:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.623 16:07:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.623 16:07:34 -- common/autotest_common.sh@10 -- # set +x 00:17:03.882 16:07:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.882 16:07:34 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:03.882 16:07:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:03.882 16:07:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.882 16:07:34 -- common/autotest_common.sh@10 -- # set +x 00:17:04.141 16:07:34 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.141 16:07:34 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:04.141 16:07:34 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.141 16:07:34 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.141 16:07:34 -- common/autotest_common.sh@10 -- # set +x 00:17:04.710 16:07:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.710 16:07:35 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:04.710 16:07:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.710 16:07:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.710 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:17:04.969 16:07:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.969 16:07:35 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:04.969 16:07:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:04.969 16:07:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.969 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:17:05.232 16:07:35 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.232 16:07:35 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:05.232 16:07:35 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.232 16:07:35 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.232 16:07:35 -- common/autotest_common.sh@10 -- # set +x 00:17:05.617 16:07:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.617 16:07:36 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:05.617 16:07:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.617 16:07:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.617 
16:07:36 -- common/autotest_common.sh@10 -- # set +x 00:17:05.876 16:07:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.876 16:07:36 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:05.876 16:07:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:05.876 16:07:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.876 16:07:36 -- common/autotest_common.sh@10 -- # set +x 00:17:06.134 16:07:36 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.134 16:07:36 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:06.134 16:07:36 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.134 16:07:36 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.134 16:07:36 -- common/autotest_common.sh@10 -- # set +x 00:17:06.701 16:07:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.701 16:07:37 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:06.701 16:07:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.701 16:07:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.701 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:17:06.960 16:07:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.960 16:07:37 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:06.960 16:07:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:06.960 16:07:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.960 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:17:07.218 16:07:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.218 16:07:37 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:07.218 16:07:37 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.218 16:07:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.218 16:07:37 -- common/autotest_common.sh@10 -- # set +x 00:17:07.477 16:07:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.477 16:07:38 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:07.477 16:07:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.477 16:07:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.477 16:07:38 -- common/autotest_common.sh@10 -- # set +x 00:17:07.736 16:07:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.736 16:07:38 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:07.736 16:07:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:07.736 16:07:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.736 16:07:38 -- common/autotest_common.sh@10 -- # set +x 00:17:08.303 16:07:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.303 16:07:38 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:08.303 16:07:38 -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:08.303 16:07:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.303 16:07:38 -- common/autotest_common.sh@10 -- # set +x 00:17:08.561 Testing NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:08.561 16:07:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.561 16:07:39 -- target/connect_stress.sh@34 -- # kill -0 1321380 00:17:08.561 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1321380) - No such process 00:17:08.561 16:07:39 -- target/connect_stress.sh@38 -- # wait 1321380 00:17:08.561 16:07:39 -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:08.561 16:07:39 -- target/connect_stress.sh@41 -- # trap - SIGINT 
SIGTERM EXIT 00:17:08.561 16:07:39 -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:08.561 16:07:39 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:08.561 16:07:39 -- nvmf/common.sh@116 -- # sync 00:17:08.561 16:07:39 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:08.561 16:07:39 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:08.561 16:07:39 -- nvmf/common.sh@119 -- # set +e 00:17:08.561 16:07:39 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:08.561 16:07:39 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:08.561 rmmod nvme_rdma 00:17:08.561 rmmod nvme_fabrics 00:17:08.561 16:07:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:08.561 16:07:39 -- nvmf/common.sh@123 -- # set -e 00:17:08.561 16:07:39 -- nvmf/common.sh@124 -- # return 0 00:17:08.561 16:07:39 -- nvmf/common.sh@477 -- # '[' -n 1321252 ']' 00:17:08.561 16:07:39 -- nvmf/common.sh@478 -- # killprocess 1321252 00:17:08.561 16:07:39 -- common/autotest_common.sh@936 -- # '[' -z 1321252 ']' 00:17:08.561 16:07:39 -- common/autotest_common.sh@940 -- # kill -0 1321252 00:17:08.561 16:07:39 -- common/autotest_common.sh@941 -- # uname 00:17:08.561 16:07:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:08.561 16:07:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1321252 00:17:08.561 16:07:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:08.561 16:07:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:08.561 16:07:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1321252' 00:17:08.561 killing process with pid 1321252 00:17:08.562 16:07:39 -- common/autotest_common.sh@955 -- # kill 1321252 00:17:08.562 16:07:39 -- common/autotest_common.sh@960 -- # wait 1321252 00:17:08.820 16:07:39 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:08.820 16:07:39 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:08.820 00:17:08.820 real 0m18.680s 00:17:08.820 user 0m41.885s 00:17:08.820 sys 0m7.801s 00:17:08.820 16:07:39 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:08.820 16:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.820 ************************************ 00:17:08.820 END TEST nvmf_connect_stress 00:17:08.820 ************************************ 00:17:08.820 16:07:39 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:17:08.820 16:07:39 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:08.820 16:07:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:08.820 16:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:08.820 ************************************ 00:17:08.820 START TEST nvmf_fused_ordering 00:17:08.820 ************************************ 00:17:08.820 16:07:39 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=rdma 00:17:09.080 * Looking for test storage... 
00:17:09.080 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:09.080 16:07:39 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:09.080 16:07:39 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:09.080 16:07:39 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:09.080 16:07:39 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:09.080 16:07:39 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:09.080 16:07:39 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:09.080 16:07:39 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:09.080 16:07:39 -- scripts/common.sh@335 -- # IFS=.-: 00:17:09.080 16:07:39 -- scripts/common.sh@335 -- # read -ra ver1 00:17:09.080 16:07:39 -- scripts/common.sh@336 -- # IFS=.-: 00:17:09.080 16:07:39 -- scripts/common.sh@336 -- # read -ra ver2 00:17:09.080 16:07:39 -- scripts/common.sh@337 -- # local 'op=<' 00:17:09.080 16:07:39 -- scripts/common.sh@339 -- # ver1_l=2 00:17:09.080 16:07:39 -- scripts/common.sh@340 -- # ver2_l=1 00:17:09.080 16:07:39 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:09.080 16:07:39 -- scripts/common.sh@343 -- # case "$op" in 00:17:09.080 16:07:39 -- scripts/common.sh@344 -- # : 1 00:17:09.080 16:07:39 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:09.080 16:07:39 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:09.080 16:07:39 -- scripts/common.sh@364 -- # decimal 1 00:17:09.080 16:07:39 -- scripts/common.sh@352 -- # local d=1 00:17:09.080 16:07:39 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:09.080 16:07:39 -- scripts/common.sh@354 -- # echo 1 00:17:09.080 16:07:39 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:09.080 16:07:39 -- scripts/common.sh@365 -- # decimal 2 00:17:09.080 16:07:39 -- scripts/common.sh@352 -- # local d=2 00:17:09.080 16:07:39 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:09.080 16:07:39 -- scripts/common.sh@354 -- # echo 2 00:17:09.080 16:07:39 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:09.080 16:07:39 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:09.080 16:07:39 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:09.080 16:07:39 -- scripts/common.sh@367 -- # return 0 00:17:09.080 16:07:39 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:09.080 16:07:39 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:09.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.080 --rc genhtml_branch_coverage=1 00:17:09.080 --rc genhtml_function_coverage=1 00:17:09.080 --rc genhtml_legend=1 00:17:09.080 --rc geninfo_all_blocks=1 00:17:09.080 --rc geninfo_unexecuted_blocks=1 00:17:09.080 00:17:09.080 ' 00:17:09.080 16:07:39 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:09.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.080 --rc genhtml_branch_coverage=1 00:17:09.080 --rc genhtml_function_coverage=1 00:17:09.081 --rc genhtml_legend=1 00:17:09.081 --rc geninfo_all_blocks=1 00:17:09.081 --rc geninfo_unexecuted_blocks=1 00:17:09.081 00:17:09.081 ' 00:17:09.081 16:07:39 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:09.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.081 --rc genhtml_branch_coverage=1 00:17:09.081 --rc genhtml_function_coverage=1 00:17:09.081 --rc genhtml_legend=1 00:17:09.081 --rc geninfo_all_blocks=1 00:17:09.081 --rc geninfo_unexecuted_blocks=1 00:17:09.081 00:17:09.081 ' 
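The cmp_versions trace above amounts to splitting dotted version strings on the same separators and comparing them field by field, so the harness can pick lcov options for old versus new lcov. A simplified standalone sketch of that comparison is below; the function name and the missing-field handling are mine, and the real helper additionally strips non-numeric suffixes through its decimal() step.

# Return success (0) when version $1 sorts strictly before version $2.
version_lt() {
    local IFS=.-:                 # split on the same separators as the traced helper
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1                      # equal versions are not "less than"
}

version_lt 1.15 2 && echo "lcov 1.15 sorts before 2"   # same comparison as the 'lt 1.15 2' trace above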
00:17:09.081 16:07:39 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:09.081 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:09.081 --rc genhtml_branch_coverage=1 00:17:09.081 --rc genhtml_function_coverage=1 00:17:09.081 --rc genhtml_legend=1 00:17:09.081 --rc geninfo_all_blocks=1 00:17:09.081 --rc geninfo_unexecuted_blocks=1 00:17:09.081 00:17:09.081 ' 00:17:09.081 16:07:39 -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:09.081 16:07:39 -- nvmf/common.sh@7 -- # uname -s 00:17:09.081 16:07:39 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:09.081 16:07:39 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:09.081 16:07:39 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:09.081 16:07:39 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:09.081 16:07:39 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:09.081 16:07:39 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:09.081 16:07:39 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:09.081 16:07:39 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:09.081 16:07:39 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:09.081 16:07:39 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:09.081 16:07:39 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:09.081 16:07:39 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:09.081 16:07:39 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:09.081 16:07:39 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:09.081 16:07:39 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:09.081 16:07:39 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:09.081 16:07:39 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:09.081 16:07:39 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:09.081 16:07:39 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:09.081 16:07:39 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.081 16:07:39 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.081 16:07:39 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.081 16:07:39 -- paths/export.sh@5 -- # export PATH 00:17:09.081 16:07:39 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:09.081 16:07:39 -- nvmf/common.sh@46 -- # : 0 00:17:09.081 16:07:39 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:09.081 16:07:39 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:09.081 16:07:39 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:09.081 16:07:39 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:09.081 16:07:39 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:09.081 16:07:39 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:09.081 16:07:39 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:09.081 16:07:39 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:09.081 16:07:39 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:17:09.081 16:07:39 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:09.081 16:07:39 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:09.081 16:07:39 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:09.081 16:07:39 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:09.081 16:07:39 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:09.081 16:07:39 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.081 16:07:39 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:09.081 16:07:39 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:09.081 16:07:39 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:09.081 16:07:39 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:09.081 16:07:39 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:09.081 16:07:39 -- common/autotest_common.sh@10 -- # set +x 00:17:15.657 16:07:46 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:15.657 16:07:46 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:15.657 16:07:46 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:15.657 16:07:46 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:15.657 16:07:46 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:15.657 16:07:46 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:15.657 16:07:46 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:15.657 16:07:46 -- nvmf/common.sh@294 -- # net_devs=() 00:17:15.657 16:07:46 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:15.657 16:07:46 -- nvmf/common.sh@295 -- # e810=() 00:17:15.657 16:07:46 -- nvmf/common.sh@295 -- # local -ga e810 00:17:15.657 16:07:46 -- nvmf/common.sh@296 -- # x722=() 
00:17:15.657 16:07:46 -- nvmf/common.sh@296 -- # local -ga x722 00:17:15.657 16:07:46 -- nvmf/common.sh@297 -- # mlx=() 00:17:15.657 16:07:46 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:15.657 16:07:46 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:15.657 16:07:46 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:15.657 16:07:46 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:15.657 16:07:46 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:15.657 16:07:46 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:15.657 16:07:46 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:15.657 16:07:46 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:15.657 16:07:46 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:15.657 16:07:46 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:15.657 16:07:46 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:15.657 16:07:46 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:15.657 16:07:46 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:15.657 16:07:46 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:15.657 16:07:46 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:15.657 16:07:46 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:15.657 16:07:46 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:15.657 16:07:46 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:15.657 16:07:46 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:15.657 16:07:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:15.657 16:07:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:15.657 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:15.657 16:07:46 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:15.657 16:07:46 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:15.657 16:07:46 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:15.657 16:07:46 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:15.657 16:07:46 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:15.657 16:07:46 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:15.657 16:07:46 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:15.657 16:07:46 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:15.657 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:15.657 16:07:46 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:15.657 16:07:46 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:15.657 16:07:46 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:15.657 16:07:46 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:15.657 16:07:46 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:15.657 16:07:46 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:15.657 16:07:46 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:15.657 16:07:46 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:15.657 16:07:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:15.657 16:07:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.657 16:07:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:15.657 16:07:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.657 16:07:46 -- nvmf/common.sh@388 -- # 
echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:15.657 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:15.657 16:07:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.657 16:07:46 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:15.657 16:07:46 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:15.657 16:07:46 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:15.657 16:07:46 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:15.657 16:07:46 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:15.657 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:15.657 16:07:46 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:15.657 16:07:46 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:15.657 16:07:46 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:15.657 16:07:46 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:15.657 16:07:46 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:15.657 16:07:46 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:15.657 16:07:46 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:15.657 16:07:46 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:15.657 16:07:46 -- nvmf/common.sh@57 -- # uname 00:17:15.657 16:07:46 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:15.657 16:07:46 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:15.657 16:07:46 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:15.657 16:07:46 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:15.657 16:07:46 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:15.657 16:07:46 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:15.657 16:07:46 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:15.657 16:07:46 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:15.657 16:07:46 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:15.657 16:07:46 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:15.657 16:07:46 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:15.657 16:07:46 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:15.657 16:07:46 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:15.657 16:07:46 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:15.658 16:07:46 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:15.658 16:07:46 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:15.658 16:07:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:15.658 16:07:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:15.658 16:07:46 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:15.658 16:07:46 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:15.658 16:07:46 -- nvmf/common.sh@104 -- # continue 2 00:17:15.658 16:07:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:15.658 16:07:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:15.658 16:07:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:15.658 16:07:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:15.658 16:07:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:15.658 16:07:46 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:15.658 16:07:46 -- nvmf/common.sh@104 -- # continue 2 00:17:15.658 16:07:46 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:15.658 16:07:46 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:15.658 16:07:46 -- nvmf/common.sh@111 -- # 
interface=mlx_0_0 00:17:15.658 16:07:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:15.658 16:07:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:15.658 16:07:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:15.658 16:07:46 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:15.658 16:07:46 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:15.658 16:07:46 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:15.658 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:15.658 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:15.658 altname enp217s0f0np0 00:17:15.658 altname ens818f0np0 00:17:15.658 inet 192.168.100.8/24 scope global mlx_0_0 00:17:15.658 valid_lft forever preferred_lft forever 00:17:15.658 16:07:46 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:15.658 16:07:46 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:15.658 16:07:46 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:15.658 16:07:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:15.658 16:07:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:15.658 16:07:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:15.658 16:07:46 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:15.658 16:07:46 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:15.658 16:07:46 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:15.658 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:15.658 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:15.658 altname enp217s0f1np1 00:17:15.658 altname ens818f1np1 00:17:15.658 inet 192.168.100.9/24 scope global mlx_0_1 00:17:15.658 valid_lft forever preferred_lft forever 00:17:15.658 16:07:46 -- nvmf/common.sh@410 -- # return 0 00:17:15.658 16:07:46 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:15.658 16:07:46 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:15.658 16:07:46 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:15.658 16:07:46 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:15.658 16:07:46 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:15.658 16:07:46 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:15.658 16:07:46 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:15.658 16:07:46 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:15.658 16:07:46 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:15.658 16:07:46 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:15.658 16:07:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:15.658 16:07:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:15.658 16:07:46 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:15.658 16:07:46 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:15.658 16:07:46 -- nvmf/common.sh@104 -- # continue 2 00:17:15.658 16:07:46 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:15.658 16:07:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:15.658 16:07:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:15.658 16:07:46 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:15.658 16:07:46 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:15.658 16:07:46 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:15.658 16:07:46 -- nvmf/common.sh@104 -- # continue 2 00:17:15.658 16:07:46 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:15.658 16:07:46 -- 
nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:15.658 16:07:46 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:15.658 16:07:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:15.658 16:07:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:15.658 16:07:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:15.658 16:07:46 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:15.658 16:07:46 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:15.658 16:07:46 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:15.658 16:07:46 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:15.658 16:07:46 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:15.658 16:07:46 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:15.658 16:07:46 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:15.658 192.168.100.9' 00:17:15.658 16:07:46 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:15.658 192.168.100.9' 00:17:15.658 16:07:46 -- nvmf/common.sh@445 -- # head -n 1 00:17:15.658 16:07:46 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:15.658 16:07:46 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:15.658 192.168.100.9' 00:17:15.658 16:07:46 -- nvmf/common.sh@446 -- # tail -n +2 00:17:15.658 16:07:46 -- nvmf/common.sh@446 -- # head -n 1 00:17:15.658 16:07:46 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:15.658 16:07:46 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:15.658 16:07:46 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:15.658 16:07:46 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:15.658 16:07:46 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:15.658 16:07:46 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:15.658 16:07:46 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:15.658 16:07:46 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:15.658 16:07:46 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:15.658 16:07:46 -- common/autotest_common.sh@10 -- # set +x 00:17:15.658 16:07:46 -- nvmf/common.sh@469 -- # nvmfpid=1326469 00:17:15.658 16:07:46 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:15.658 16:07:46 -- nvmf/common.sh@470 -- # waitforlisten 1326469 00:17:15.658 16:07:46 -- common/autotest_common.sh@829 -- # '[' -z 1326469 ']' 00:17:15.658 16:07:46 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.658 16:07:46 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:15.658 16:07:46 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.658 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.658 16:07:46 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:15.658 16:07:46 -- common/autotest_common.sh@10 -- # set +x 00:17:15.658 [2024-11-20 16:07:46.451968] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
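The get_ip_address calls traced above reduce to a short ip/awk/cut pipeline whose results populate NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP before the target is started. A standalone sketch of that pipeline, using the interface names and addresses from this run:

# Extract the IPv4 address of an interface (field 4 of 'ip -o -4 addr show',
# with the prefix length stripped), mirroring the pipeline in the trace.
get_ip_address() {
    local interface=$1
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run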
00:17:15.658 [2024-11-20 16:07:46.452017] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:15.916 EAL: No free 2048 kB hugepages reported on node 1 00:17:15.916 [2024-11-20 16:07:46.523785] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.916 [2024-11-20 16:07:46.559299] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:15.916 [2024-11-20 16:07:46.559412] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:15.916 [2024-11-20 16:07:46.559423] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:15.916 [2024-11-20 16:07:46.559431] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:15.916 [2024-11-20 16:07:46.559458] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.482 16:07:47 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.482 16:07:47 -- common/autotest_common.sh@862 -- # return 0 00:17:16.482 16:07:47 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:16.482 16:07:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:16.482 16:07:47 -- common/autotest_common.sh@10 -- # set +x 00:17:16.741 16:07:47 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.741 16:07:47 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:16.741 16:07:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.741 16:07:47 -- common/autotest_common.sh@10 -- # set +x 00:17:16.741 [2024-11-20 16:07:47.333207] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1099550/0x109da00) succeed. 00:17:16.741 [2024-11-20 16:07:47.342202] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x109aa00/0x10df0a0) succeed. 
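At this point the RDMA transport has been created and both mlx5 IB devices are registered; the fused_ordering test then shapes the target entirely over RPC, as the rpc_cmd traces just below show. Expressed as direct calls to SPDK's rpc.py client, the same configuration would look roughly like the sketch below; the rpc.py invocation and the default /var/tmp/spdk.sock socket are assumptions on my part, while the commands and arguments are taken from the trace.

# Sketch of the target configuration performed by fused_ordering.sh, expressed
# as direct rpc.py calls instead of the test's rpc_cmd wrapper.
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
./scripts/rpc.py bdev_null_create NULL1 1000 512   # null backing bdev, reported later in the log as a 1GB namespace
./scripts/rpc.py bdev_wait_for_examine
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1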
00:17:16.741 16:07:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.741 16:07:47 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:16.741 16:07:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.741 16:07:47 -- common/autotest_common.sh@10 -- # set +x 00:17:16.741 16:07:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.741 16:07:47 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:16.741 16:07:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.741 16:07:47 -- common/autotest_common.sh@10 -- # set +x 00:17:16.741 [2024-11-20 16:07:47.402919] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:16.741 16:07:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.741 16:07:47 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:16.741 16:07:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.741 16:07:47 -- common/autotest_common.sh@10 -- # set +x 00:17:16.741 NULL1 00:17:16.741 16:07:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.741 16:07:47 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:16.741 16:07:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.741 16:07:47 -- common/autotest_common.sh@10 -- # set +x 00:17:16.741 16:07:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.741 16:07:47 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:16.741 16:07:47 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.741 16:07:47 -- common/autotest_common.sh@10 -- # set +x 00:17:16.741 16:07:47 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.742 16:07:47 -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:16.742 [2024-11-20 16:07:47.457684] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:17:16.742 [2024-11-20 16:07:47.457720] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1326753 ] 00:17:16.742 EAL: No free 2048 kB hugepages reported on node 1 00:17:17.001 Attached to nqn.2016-06.io.spdk:cnode1 00:17:17.001 Namespace ID: 1 size: 1GB 00:17:17.001 fused_ordering(0) 00:17:17.001 fused_ordering(1) 00:17:17.001 fused_ordering(2) 00:17:17.001 fused_ordering(3) 00:17:17.001 fused_ordering(4) 00:17:17.001 fused_ordering(5) 00:17:17.001 fused_ordering(6) 00:17:17.001 fused_ordering(7) 00:17:17.001 fused_ordering(8) 00:17:17.001 fused_ordering(9) 00:17:17.001 fused_ordering(10) 00:17:17.001 fused_ordering(11) 00:17:17.001 fused_ordering(12) 00:17:17.001 fused_ordering(13) 00:17:17.001 fused_ordering(14) 00:17:17.001 fused_ordering(15) 00:17:17.001 fused_ordering(16) 00:17:17.001 fused_ordering(17) 00:17:17.001 fused_ordering(18) 00:17:17.001 fused_ordering(19) 00:17:17.001 fused_ordering(20) 00:17:17.001 fused_ordering(21) 00:17:17.001 fused_ordering(22) 00:17:17.001 fused_ordering(23) 00:17:17.001 fused_ordering(24) 00:17:17.001 fused_ordering(25) 00:17:17.001 fused_ordering(26) 00:17:17.001 fused_ordering(27) 00:17:17.001 fused_ordering(28) 00:17:17.001 fused_ordering(29) 00:17:17.001 fused_ordering(30) 00:17:17.001 fused_ordering(31) 00:17:17.001 fused_ordering(32) 00:17:17.001 fused_ordering(33) 00:17:17.001 fused_ordering(34) 00:17:17.001 fused_ordering(35) 00:17:17.001 fused_ordering(36) 00:17:17.001 fused_ordering(37) 00:17:17.001 fused_ordering(38) 00:17:17.001 fused_ordering(39) 00:17:17.001 fused_ordering(40) 00:17:17.001 fused_ordering(41) 00:17:17.001 fused_ordering(42) 00:17:17.001 fused_ordering(43) 00:17:17.001 fused_ordering(44) 00:17:17.001 fused_ordering(45) 00:17:17.001 fused_ordering(46) 00:17:17.001 fused_ordering(47) 00:17:17.001 fused_ordering(48) 00:17:17.001 fused_ordering(49) 00:17:17.001 fused_ordering(50) 00:17:17.001 fused_ordering(51) 00:17:17.002 fused_ordering(52) 00:17:17.002 fused_ordering(53) 00:17:17.002 fused_ordering(54) 00:17:17.002 fused_ordering(55) 00:17:17.002 fused_ordering(56) 00:17:17.002 fused_ordering(57) 00:17:17.002 fused_ordering(58) 00:17:17.002 fused_ordering(59) 00:17:17.002 fused_ordering(60) 00:17:17.002 fused_ordering(61) 00:17:17.002 fused_ordering(62) 00:17:17.002 fused_ordering(63) 00:17:17.002 fused_ordering(64) 00:17:17.002 fused_ordering(65) 00:17:17.002 fused_ordering(66) 00:17:17.002 fused_ordering(67) 00:17:17.002 fused_ordering(68) 00:17:17.002 fused_ordering(69) 00:17:17.002 fused_ordering(70) 00:17:17.002 fused_ordering(71) 00:17:17.002 fused_ordering(72) 00:17:17.002 fused_ordering(73) 00:17:17.002 fused_ordering(74) 00:17:17.002 fused_ordering(75) 00:17:17.002 fused_ordering(76) 00:17:17.002 fused_ordering(77) 00:17:17.002 fused_ordering(78) 00:17:17.002 fused_ordering(79) 00:17:17.002 fused_ordering(80) 00:17:17.002 fused_ordering(81) 00:17:17.002 fused_ordering(82) 00:17:17.002 fused_ordering(83) 00:17:17.002 fused_ordering(84) 00:17:17.002 fused_ordering(85) 00:17:17.002 fused_ordering(86) 00:17:17.002 fused_ordering(87) 00:17:17.002 fused_ordering(88) 00:17:17.002 fused_ordering(89) 00:17:17.002 fused_ordering(90) 00:17:17.002 fused_ordering(91) 00:17:17.002 fused_ordering(92) 00:17:17.002 fused_ordering(93) 00:17:17.002 fused_ordering(94) 00:17:17.002 fused_ordering(95) 00:17:17.002 fused_ordering(96) 00:17:17.002 
fused_ordering(97) 00:17:17.002 fused_ordering(98) 00:17:17.002 fused_ordering(99) 00:17:17.002 fused_ordering(100) 00:17:17.002 fused_ordering(101) 00:17:17.002 fused_ordering(102) 00:17:17.002 fused_ordering(103) 00:17:17.002 fused_ordering(104) 00:17:17.002 fused_ordering(105) 00:17:17.002 fused_ordering(106) 00:17:17.002 fused_ordering(107) 00:17:17.002 fused_ordering(108) 00:17:17.002 fused_ordering(109) 00:17:17.002 fused_ordering(110) 00:17:17.002 fused_ordering(111) 00:17:17.002 fused_ordering(112) 00:17:17.002 fused_ordering(113) 00:17:17.002 fused_ordering(114) 00:17:17.002 fused_ordering(115) 00:17:17.002 fused_ordering(116) 00:17:17.002 fused_ordering(117) 00:17:17.002 fused_ordering(118) 00:17:17.002 fused_ordering(119) 00:17:17.002 fused_ordering(120) 00:17:17.002 fused_ordering(121) 00:17:17.002 fused_ordering(122) 00:17:17.002 fused_ordering(123) 00:17:17.002 fused_ordering(124) 00:17:17.002 fused_ordering(125) 00:17:17.002 fused_ordering(126) 00:17:17.002 fused_ordering(127) 00:17:17.002 fused_ordering(128) 00:17:17.002 fused_ordering(129) 00:17:17.002 fused_ordering(130) 00:17:17.002 fused_ordering(131) 00:17:17.002 fused_ordering(132) 00:17:17.002 fused_ordering(133) 00:17:17.002 fused_ordering(134) 00:17:17.002 fused_ordering(135) 00:17:17.002 fused_ordering(136) 00:17:17.002 fused_ordering(137) 00:17:17.002 fused_ordering(138) 00:17:17.002 fused_ordering(139) 00:17:17.002 fused_ordering(140) 00:17:17.002 fused_ordering(141) 00:17:17.002 fused_ordering(142) 00:17:17.002 fused_ordering(143) 00:17:17.002 fused_ordering(144) 00:17:17.002 fused_ordering(145) 00:17:17.002 fused_ordering(146) 00:17:17.002 fused_ordering(147) 00:17:17.002 fused_ordering(148) 00:17:17.002 fused_ordering(149) 00:17:17.002 fused_ordering(150) 00:17:17.002 fused_ordering(151) 00:17:17.002 fused_ordering(152) 00:17:17.002 fused_ordering(153) 00:17:17.002 fused_ordering(154) 00:17:17.002 fused_ordering(155) 00:17:17.002 fused_ordering(156) 00:17:17.002 fused_ordering(157) 00:17:17.002 fused_ordering(158) 00:17:17.002 fused_ordering(159) 00:17:17.002 fused_ordering(160) 00:17:17.002 fused_ordering(161) 00:17:17.002 fused_ordering(162) 00:17:17.002 fused_ordering(163) 00:17:17.002 fused_ordering(164) 00:17:17.002 fused_ordering(165) 00:17:17.002 fused_ordering(166) 00:17:17.002 fused_ordering(167) 00:17:17.002 fused_ordering(168) 00:17:17.002 fused_ordering(169) 00:17:17.002 fused_ordering(170) 00:17:17.002 fused_ordering(171) 00:17:17.002 fused_ordering(172) 00:17:17.002 fused_ordering(173) 00:17:17.002 fused_ordering(174) 00:17:17.002 fused_ordering(175) 00:17:17.002 fused_ordering(176) 00:17:17.002 fused_ordering(177) 00:17:17.002 fused_ordering(178) 00:17:17.002 fused_ordering(179) 00:17:17.002 fused_ordering(180) 00:17:17.002 fused_ordering(181) 00:17:17.002 fused_ordering(182) 00:17:17.002 fused_ordering(183) 00:17:17.002 fused_ordering(184) 00:17:17.002 fused_ordering(185) 00:17:17.002 fused_ordering(186) 00:17:17.002 fused_ordering(187) 00:17:17.002 fused_ordering(188) 00:17:17.002 fused_ordering(189) 00:17:17.002 fused_ordering(190) 00:17:17.002 fused_ordering(191) 00:17:17.002 fused_ordering(192) 00:17:17.002 fused_ordering(193) 00:17:17.002 fused_ordering(194) 00:17:17.002 fused_ordering(195) 00:17:17.002 fused_ordering(196) 00:17:17.002 fused_ordering(197) 00:17:17.002 fused_ordering(198) 00:17:17.002 fused_ordering(199) 00:17:17.002 fused_ordering(200) 00:17:17.002 fused_ordering(201) 00:17:17.002 fused_ordering(202) 00:17:17.002 fused_ordering(203) 00:17:17.002 fused_ordering(204) 
00:17:17.003 fused_ordering(205) 00:17:17.003 fused_ordering(206) 00:17:17.003 fused_ordering(207) 00:17:17.003 fused_ordering(208) 00:17:17.003 fused_ordering(209) 00:17:17.003 fused_ordering(210) 00:17:17.003 fused_ordering(211) 00:17:17.003 fused_ordering(212) 00:17:17.003 fused_ordering(213) 00:17:17.003 fused_ordering(214) 00:17:17.003 fused_ordering(215) 00:17:17.003 fused_ordering(216) 00:17:17.003 fused_ordering(217) 00:17:17.003 fused_ordering(218) 00:17:17.003 fused_ordering(219) 00:17:17.003 fused_ordering(220) 00:17:17.003 fused_ordering(221) 00:17:17.003 fused_ordering(222) 00:17:17.003 fused_ordering(223) 00:17:17.003 fused_ordering(224) 00:17:17.003 fused_ordering(225) 00:17:17.003 fused_ordering(226) 00:17:17.003 fused_ordering(227) 00:17:17.003 fused_ordering(228) 00:17:17.003 fused_ordering(229) 00:17:17.003 fused_ordering(230) 00:17:17.003 fused_ordering(231) 00:17:17.003 fused_ordering(232) 00:17:17.003 fused_ordering(233) 00:17:17.003 fused_ordering(234) 00:17:17.003 fused_ordering(235) 00:17:17.003 fused_ordering(236) 00:17:17.003 fused_ordering(237) 00:17:17.003 fused_ordering(238) 00:17:17.003 fused_ordering(239) 00:17:17.003 fused_ordering(240) 00:17:17.003 fused_ordering(241) 00:17:17.003 fused_ordering(242) 00:17:17.003 fused_ordering(243) 00:17:17.003 fused_ordering(244) 00:17:17.003 fused_ordering(245) 00:17:17.003 fused_ordering(246) 00:17:17.003 fused_ordering(247) 00:17:17.003 fused_ordering(248) 00:17:17.003 fused_ordering(249) 00:17:17.003 fused_ordering(250) 00:17:17.003 fused_ordering(251) 00:17:17.003 fused_ordering(252) 00:17:17.003 fused_ordering(253) 00:17:17.003 fused_ordering(254) 00:17:17.003 fused_ordering(255) 00:17:17.003 fused_ordering(256) 00:17:17.003 fused_ordering(257) 00:17:17.003 fused_ordering(258) 00:17:17.003 fused_ordering(259) 00:17:17.003 fused_ordering(260) 00:17:17.003 fused_ordering(261) 00:17:17.003 fused_ordering(262) 00:17:17.003 fused_ordering(263) 00:17:17.003 fused_ordering(264) 00:17:17.003 fused_ordering(265) 00:17:17.003 fused_ordering(266) 00:17:17.003 fused_ordering(267) 00:17:17.003 fused_ordering(268) 00:17:17.003 fused_ordering(269) 00:17:17.003 fused_ordering(270) 00:17:17.003 fused_ordering(271) 00:17:17.003 fused_ordering(272) 00:17:17.003 fused_ordering(273) 00:17:17.003 fused_ordering(274) 00:17:17.003 fused_ordering(275) 00:17:17.003 fused_ordering(276) 00:17:17.003 fused_ordering(277) 00:17:17.003 fused_ordering(278) 00:17:17.003 fused_ordering(279) 00:17:17.003 fused_ordering(280) 00:17:17.003 fused_ordering(281) 00:17:17.003 fused_ordering(282) 00:17:17.003 fused_ordering(283) 00:17:17.003 fused_ordering(284) 00:17:17.003 fused_ordering(285) 00:17:17.003 fused_ordering(286) 00:17:17.003 fused_ordering(287) 00:17:17.003 fused_ordering(288) 00:17:17.003 fused_ordering(289) 00:17:17.003 fused_ordering(290) 00:17:17.003 fused_ordering(291) 00:17:17.003 fused_ordering(292) 00:17:17.003 fused_ordering(293) 00:17:17.003 fused_ordering(294) 00:17:17.003 fused_ordering(295) 00:17:17.003 fused_ordering(296) 00:17:17.003 fused_ordering(297) 00:17:17.003 fused_ordering(298) 00:17:17.003 fused_ordering(299) 00:17:17.003 fused_ordering(300) 00:17:17.003 fused_ordering(301) 00:17:17.003 fused_ordering(302) 00:17:17.003 fused_ordering(303) 00:17:17.003 fused_ordering(304) 00:17:17.003 fused_ordering(305) 00:17:17.003 fused_ordering(306) 00:17:17.003 fused_ordering(307) 00:17:17.003 fused_ordering(308) 00:17:17.003 fused_ordering(309) 00:17:17.003 fused_ordering(310) 00:17:17.003 fused_ordering(311) 00:17:17.003 
fused_ordering(312) 00:17:17.003 fused_ordering(313) 00:17:17.003 fused_ordering(314) 00:17:17.003 fused_ordering(315) 00:17:17.003 fused_ordering(316) 00:17:17.003 fused_ordering(317) 00:17:17.003 fused_ordering(318) 00:17:17.003 fused_ordering(319) 00:17:17.003 fused_ordering(320) 00:17:17.003 fused_ordering(321) 00:17:17.003 fused_ordering(322) 00:17:17.003 fused_ordering(323) 00:17:17.003 fused_ordering(324) 00:17:17.003 fused_ordering(325) 00:17:17.003 fused_ordering(326) 00:17:17.003 fused_ordering(327) 00:17:17.003 fused_ordering(328) 00:17:17.003 fused_ordering(329) 00:17:17.003 fused_ordering(330) 00:17:17.003 fused_ordering(331) 00:17:17.003 fused_ordering(332) 00:17:17.003 fused_ordering(333) 00:17:17.003 fused_ordering(334) 00:17:17.003 fused_ordering(335) 00:17:17.003 fused_ordering(336) 00:17:17.003 fused_ordering(337) 00:17:17.003 fused_ordering(338) 00:17:17.003 fused_ordering(339) 00:17:17.003 fused_ordering(340) 00:17:17.003 fused_ordering(341) 00:17:17.003 fused_ordering(342) 00:17:17.003 fused_ordering(343) 00:17:17.003 fused_ordering(344) 00:17:17.003 fused_ordering(345) 00:17:17.003 fused_ordering(346) 00:17:17.003 fused_ordering(347) 00:17:17.003 fused_ordering(348) 00:17:17.003 fused_ordering(349) 00:17:17.003 fused_ordering(350) 00:17:17.003 fused_ordering(351) 00:17:17.003 fused_ordering(352) 00:17:17.003 fused_ordering(353) 00:17:17.003 fused_ordering(354) 00:17:17.003 fused_ordering(355) 00:17:17.004 fused_ordering(356) 00:17:17.004 fused_ordering(357) 00:17:17.004 fused_ordering(358) 00:17:17.004 fused_ordering(359) 00:17:17.004 fused_ordering(360) 00:17:17.004 fused_ordering(361) 00:17:17.004 fused_ordering(362) 00:17:17.004 fused_ordering(363) 00:17:17.004 fused_ordering(364) 00:17:17.004 fused_ordering(365) 00:17:17.004 fused_ordering(366) 00:17:17.004 fused_ordering(367) 00:17:17.004 fused_ordering(368) 00:17:17.004 fused_ordering(369) 00:17:17.004 fused_ordering(370) 00:17:17.004 fused_ordering(371) 00:17:17.004 fused_ordering(372) 00:17:17.004 fused_ordering(373) 00:17:17.004 fused_ordering(374) 00:17:17.004 fused_ordering(375) 00:17:17.004 fused_ordering(376) 00:17:17.004 fused_ordering(377) 00:17:17.004 fused_ordering(378) 00:17:17.004 fused_ordering(379) 00:17:17.004 fused_ordering(380) 00:17:17.004 fused_ordering(381) 00:17:17.004 fused_ordering(382) 00:17:17.004 fused_ordering(383) 00:17:17.004 fused_ordering(384) 00:17:17.004 fused_ordering(385) 00:17:17.004 fused_ordering(386) 00:17:17.004 fused_ordering(387) 00:17:17.004 fused_ordering(388) 00:17:17.004 fused_ordering(389) 00:17:17.004 fused_ordering(390) 00:17:17.004 fused_ordering(391) 00:17:17.004 fused_ordering(392) 00:17:17.004 fused_ordering(393) 00:17:17.004 fused_ordering(394) 00:17:17.004 fused_ordering(395) 00:17:17.004 fused_ordering(396) 00:17:17.004 fused_ordering(397) 00:17:17.004 fused_ordering(398) 00:17:17.004 fused_ordering(399) 00:17:17.004 fused_ordering(400) 00:17:17.004 fused_ordering(401) 00:17:17.004 fused_ordering(402) 00:17:17.004 fused_ordering(403) 00:17:17.004 fused_ordering(404) 00:17:17.004 fused_ordering(405) 00:17:17.004 fused_ordering(406) 00:17:17.004 fused_ordering(407) 00:17:17.004 fused_ordering(408) 00:17:17.004 fused_ordering(409) 00:17:17.004 fused_ordering(410) 00:17:17.264 fused_ordering(411) 00:17:17.264 fused_ordering(412) 00:17:17.264 fused_ordering(413) 00:17:17.264 fused_ordering(414) 00:17:17.264 fused_ordering(415) 00:17:17.264 fused_ordering(416) 00:17:17.264 fused_ordering(417) 00:17:17.264 fused_ordering(418) 00:17:17.264 fused_ordering(419) 
00:17:17.264 fused_ordering(420) 00:17:17.264 fused_ordering(421) 00:17:17.264 fused_ordering(422) 00:17:17.264 fused_ordering(423) 00:17:17.264 fused_ordering(424) 00:17:17.264 fused_ordering(425) 00:17:17.264 fused_ordering(426) 00:17:17.264 fused_ordering(427) 00:17:17.264 fused_ordering(428) 00:17:17.264 fused_ordering(429) 00:17:17.264 fused_ordering(430) 00:17:17.264 fused_ordering(431) 00:17:17.264 fused_ordering(432) 00:17:17.264 fused_ordering(433) 00:17:17.264 fused_ordering(434) 00:17:17.264 fused_ordering(435) 00:17:17.264 fused_ordering(436) 00:17:17.264 fused_ordering(437) 00:17:17.264 fused_ordering(438) 00:17:17.264 fused_ordering(439) 00:17:17.264 fused_ordering(440) 00:17:17.264 fused_ordering(441) 00:17:17.264 fused_ordering(442) 00:17:17.264 fused_ordering(443) 00:17:17.264 fused_ordering(444) 00:17:17.264 fused_ordering(445) 00:17:17.264 fused_ordering(446) 00:17:17.264 fused_ordering(447) 00:17:17.264 fused_ordering(448) 00:17:17.264 fused_ordering(449) 00:17:17.264 fused_ordering(450) 00:17:17.264 fused_ordering(451) 00:17:17.264 fused_ordering(452) 00:17:17.264 fused_ordering(453) 00:17:17.264 fused_ordering(454) 00:17:17.264 fused_ordering(455) 00:17:17.264 fused_ordering(456) 00:17:17.264 fused_ordering(457) 00:17:17.264 fused_ordering(458) 00:17:17.264 fused_ordering(459) 00:17:17.264 fused_ordering(460) 00:17:17.264 fused_ordering(461) 00:17:17.264 fused_ordering(462) 00:17:17.264 fused_ordering(463) 00:17:17.264 fused_ordering(464) 00:17:17.264 fused_ordering(465) 00:17:17.264 fused_ordering(466) 00:17:17.264 fused_ordering(467) 00:17:17.264 fused_ordering(468) 00:17:17.264 fused_ordering(469) 00:17:17.264 fused_ordering(470) 00:17:17.264 fused_ordering(471) 00:17:17.264 fused_ordering(472) 00:17:17.264 fused_ordering(473) 00:17:17.264 fused_ordering(474) 00:17:17.264 fused_ordering(475) 00:17:17.264 fused_ordering(476) 00:17:17.264 fused_ordering(477) 00:17:17.264 fused_ordering(478) 00:17:17.264 fused_ordering(479) 00:17:17.264 fused_ordering(480) 00:17:17.264 fused_ordering(481) 00:17:17.264 fused_ordering(482) 00:17:17.264 fused_ordering(483) 00:17:17.264 fused_ordering(484) 00:17:17.264 fused_ordering(485) 00:17:17.264 fused_ordering(486) 00:17:17.264 fused_ordering(487) 00:17:17.264 fused_ordering(488) 00:17:17.264 fused_ordering(489) 00:17:17.264 fused_ordering(490) 00:17:17.264 fused_ordering(491) 00:17:17.264 fused_ordering(492) 00:17:17.264 fused_ordering(493) 00:17:17.264 fused_ordering(494) 00:17:17.264 fused_ordering(495) 00:17:17.264 fused_ordering(496) 00:17:17.264 fused_ordering(497) 00:17:17.264 fused_ordering(498) 00:17:17.264 fused_ordering(499) 00:17:17.264 fused_ordering(500) 00:17:17.264 fused_ordering(501) 00:17:17.264 fused_ordering(502) 00:17:17.264 fused_ordering(503) 00:17:17.264 fused_ordering(504) 00:17:17.264 fused_ordering(505) 00:17:17.264 fused_ordering(506) 00:17:17.264 fused_ordering(507) 00:17:17.264 fused_ordering(508) 00:17:17.264 fused_ordering(509) 00:17:17.264 fused_ordering(510) 00:17:17.264 fused_ordering(511) 00:17:17.264 fused_ordering(512) 00:17:17.264 fused_ordering(513) 00:17:17.264 fused_ordering(514) 00:17:17.264 fused_ordering(515) 00:17:17.264 fused_ordering(516) 00:17:17.264 fused_ordering(517) 00:17:17.264 fused_ordering(518) 00:17:17.264 fused_ordering(519) 00:17:17.264 fused_ordering(520) 00:17:17.264 fused_ordering(521) 00:17:17.264 fused_ordering(522) 00:17:17.264 fused_ordering(523) 00:17:17.264 fused_ordering(524) 00:17:17.264 fused_ordering(525) 00:17:17.264 fused_ordering(526) 00:17:17.264 
fused_ordering(527) 00:17:17.264 [fused_ordering output condensed: entries fused_ordering(528) through fused_ordering(956), one per iteration in the same format, were logged between 00:17:17.264 and 00:17:17.525]
fused_ordering(957) 00:17:17.525 fused_ordering(958) 00:17:17.525 fused_ordering(959) 00:17:17.525 fused_ordering(960) 00:17:17.525 fused_ordering(961) 00:17:17.525 fused_ordering(962) 00:17:17.525 fused_ordering(963) 00:17:17.525 fused_ordering(964) 00:17:17.525 fused_ordering(965) 00:17:17.525 fused_ordering(966) 00:17:17.525 fused_ordering(967) 00:17:17.525 fused_ordering(968) 00:17:17.526 fused_ordering(969) 00:17:17.526 fused_ordering(970) 00:17:17.526 fused_ordering(971) 00:17:17.526 fused_ordering(972) 00:17:17.526 fused_ordering(973) 00:17:17.526 fused_ordering(974) 00:17:17.526 fused_ordering(975) 00:17:17.526 fused_ordering(976) 00:17:17.526 fused_ordering(977) 00:17:17.526 fused_ordering(978) 00:17:17.526 fused_ordering(979) 00:17:17.526 fused_ordering(980) 00:17:17.526 fused_ordering(981) 00:17:17.526 fused_ordering(982) 00:17:17.526 fused_ordering(983) 00:17:17.526 fused_ordering(984) 00:17:17.526 fused_ordering(985) 00:17:17.526 fused_ordering(986) 00:17:17.526 fused_ordering(987) 00:17:17.526 fused_ordering(988) 00:17:17.526 fused_ordering(989) 00:17:17.526 fused_ordering(990) 00:17:17.526 fused_ordering(991) 00:17:17.526 fused_ordering(992) 00:17:17.526 fused_ordering(993) 00:17:17.526 fused_ordering(994) 00:17:17.526 fused_ordering(995) 00:17:17.526 fused_ordering(996) 00:17:17.526 fused_ordering(997) 00:17:17.526 fused_ordering(998) 00:17:17.526 fused_ordering(999) 00:17:17.526 fused_ordering(1000) 00:17:17.526 fused_ordering(1001) 00:17:17.526 fused_ordering(1002) 00:17:17.526 fused_ordering(1003) 00:17:17.526 fused_ordering(1004) 00:17:17.526 fused_ordering(1005) 00:17:17.526 fused_ordering(1006) 00:17:17.526 fused_ordering(1007) 00:17:17.526 fused_ordering(1008) 00:17:17.526 fused_ordering(1009) 00:17:17.526 fused_ordering(1010) 00:17:17.526 fused_ordering(1011) 00:17:17.526 fused_ordering(1012) 00:17:17.526 fused_ordering(1013) 00:17:17.526 fused_ordering(1014) 00:17:17.526 fused_ordering(1015) 00:17:17.526 fused_ordering(1016) 00:17:17.526 fused_ordering(1017) 00:17:17.526 fused_ordering(1018) 00:17:17.526 fused_ordering(1019) 00:17:17.526 fused_ordering(1020) 00:17:17.526 fused_ordering(1021) 00:17:17.526 fused_ordering(1022) 00:17:17.526 fused_ordering(1023) 00:17:17.526 16:07:48 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:17.526 16:07:48 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:17.526 16:07:48 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:17.526 16:07:48 -- nvmf/common.sh@116 -- # sync 00:17:17.526 16:07:48 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:17.526 16:07:48 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:17.526 16:07:48 -- nvmf/common.sh@119 -- # set +e 00:17:17.526 16:07:48 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:17.526 16:07:48 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:17.526 rmmod nvme_rdma 00:17:17.526 rmmod nvme_fabrics 00:17:17.526 16:07:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:17.526 16:07:48 -- nvmf/common.sh@123 -- # set -e 00:17:17.526 16:07:48 -- nvmf/common.sh@124 -- # return 0 00:17:17.526 16:07:48 -- nvmf/common.sh@477 -- # '[' -n 1326469 ']' 00:17:17.526 16:07:48 -- nvmf/common.sh@478 -- # killprocess 1326469 00:17:17.526 16:07:48 -- common/autotest_common.sh@936 -- # '[' -z 1326469 ']' 00:17:17.526 16:07:48 -- common/autotest_common.sh@940 -- # kill -0 1326469 00:17:17.526 16:07:48 -- common/autotest_common.sh@941 -- # uname 00:17:17.526 16:07:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:17.526 16:07:48 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1326469 00:17:17.526 16:07:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:17:17.526 16:07:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:17:17.526 16:07:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1326469' 00:17:17.526 killing process with pid 1326469 00:17:17.526 16:07:48 -- common/autotest_common.sh@955 -- # kill 1326469 00:17:17.526 16:07:48 -- common/autotest_common.sh@960 -- # wait 1326469 00:17:17.785 16:07:48 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:17.785 16:07:48 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:17.785 00:17:17.785 real 0m8.852s 00:17:17.785 user 0m4.692s 00:17:17.785 sys 0m5.509s 00:17:17.785 16:07:48 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:17.785 16:07:48 -- common/autotest_common.sh@10 -- # set +x 00:17:17.785 ************************************ 00:17:17.785 END TEST nvmf_fused_ordering 00:17:17.785 ************************************ 00:17:17.785 16:07:48 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:17:17.785 16:07:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:17.785 16:07:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:17.785 16:07:48 -- common/autotest_common.sh@10 -- # set +x 00:17:17.785 ************************************ 00:17:17.785 START TEST nvmf_delete_subsystem 00:17:17.785 ************************************ 00:17:17.785 16:07:48 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=rdma 00:17:18.045 * Looking for test storage... 00:17:18.045 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:18.045 16:07:48 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:18.045 16:07:48 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:18.045 16:07:48 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:18.045 16:07:48 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:18.045 16:07:48 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:18.045 16:07:48 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:18.045 16:07:48 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:18.045 16:07:48 -- scripts/common.sh@335 -- # IFS=.-: 00:17:18.045 16:07:48 -- scripts/common.sh@335 -- # read -ra ver1 00:17:18.045 16:07:48 -- scripts/common.sh@336 -- # IFS=.-: 00:17:18.045 16:07:48 -- scripts/common.sh@336 -- # read -ra ver2 00:17:18.045 16:07:48 -- scripts/common.sh@337 -- # local 'op=<' 00:17:18.045 16:07:48 -- scripts/common.sh@339 -- # ver1_l=2 00:17:18.045 16:07:48 -- scripts/common.sh@340 -- # ver2_l=1 00:17:18.045 16:07:48 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:18.045 16:07:48 -- scripts/common.sh@343 -- # case "$op" in 00:17:18.045 16:07:48 -- scripts/common.sh@344 -- # : 1 00:17:18.045 16:07:48 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:18.045 16:07:48 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:18.045 16:07:48 -- scripts/common.sh@364 -- # decimal 1 00:17:18.045 16:07:48 -- scripts/common.sh@352 -- # local d=1 00:17:18.045 16:07:48 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:18.045 16:07:48 -- scripts/common.sh@354 -- # echo 1 00:17:18.045 16:07:48 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:18.045 16:07:48 -- scripts/common.sh@365 -- # decimal 2 00:17:18.045 16:07:48 -- scripts/common.sh@352 -- # local d=2 00:17:18.045 16:07:48 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:18.045 16:07:48 -- scripts/common.sh@354 -- # echo 2 00:17:18.045 16:07:48 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:18.045 16:07:48 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:18.045 16:07:48 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:18.045 16:07:48 -- scripts/common.sh@367 -- # return 0 00:17:18.045 16:07:48 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:18.045 16:07:48 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:18.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.045 --rc genhtml_branch_coverage=1 00:17:18.045 --rc genhtml_function_coverage=1 00:17:18.045 --rc genhtml_legend=1 00:17:18.045 --rc geninfo_all_blocks=1 00:17:18.045 --rc geninfo_unexecuted_blocks=1 00:17:18.045 00:17:18.045 ' 00:17:18.045 16:07:48 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:18.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.045 --rc genhtml_branch_coverage=1 00:17:18.045 --rc genhtml_function_coverage=1 00:17:18.045 --rc genhtml_legend=1 00:17:18.045 --rc geninfo_all_blocks=1 00:17:18.045 --rc geninfo_unexecuted_blocks=1 00:17:18.045 00:17:18.045 ' 00:17:18.045 16:07:48 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:18.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.045 --rc genhtml_branch_coverage=1 00:17:18.045 --rc genhtml_function_coverage=1 00:17:18.045 --rc genhtml_legend=1 00:17:18.045 --rc geninfo_all_blocks=1 00:17:18.045 --rc geninfo_unexecuted_blocks=1 00:17:18.045 00:17:18.045 ' 00:17:18.045 16:07:48 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:18.045 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:18.045 --rc genhtml_branch_coverage=1 00:17:18.045 --rc genhtml_function_coverage=1 00:17:18.045 --rc genhtml_legend=1 00:17:18.045 --rc geninfo_all_blocks=1 00:17:18.045 --rc geninfo_unexecuted_blocks=1 00:17:18.045 00:17:18.045 ' 00:17:18.045 16:07:48 -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:18.045 16:07:48 -- nvmf/common.sh@7 -- # uname -s 00:17:18.045 16:07:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:18.045 16:07:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:18.045 16:07:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:18.045 16:07:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:18.045 16:07:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:18.045 16:07:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:18.045 16:07:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:18.045 16:07:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:18.045 16:07:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:18.045 16:07:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:18.045 16:07:48 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:18.045 16:07:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:18.045 16:07:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:18.045 16:07:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:18.045 16:07:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:18.045 16:07:48 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:18.045 16:07:48 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:18.045 16:07:48 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:18.045 16:07:48 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:18.045 16:07:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.045 16:07:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.045 16:07:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.045 16:07:48 -- paths/export.sh@5 -- # export PATH 00:17:18.046 16:07:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:18.046 16:07:48 -- nvmf/common.sh@46 -- # : 0 00:17:18.046 16:07:48 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:18.046 16:07:48 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:18.046 16:07:48 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:18.046 16:07:48 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:18.046 16:07:48 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:18.046 16:07:48 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:18.046 16:07:48 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:18.046 16:07:48 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:18.046 16:07:48 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:17:18.046 16:07:48 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:18.046 16:07:48 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:18.046 16:07:48 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:18.046 16:07:48 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:18.046 16:07:48 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:18.046 16:07:48 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:18.046 16:07:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:18.046 16:07:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:18.046 16:07:48 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:18.046 16:07:48 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:18.046 16:07:48 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:18.046 16:07:48 -- common/autotest_common.sh@10 -- # set +x 00:17:24.624 16:07:55 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:24.624 16:07:55 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:24.624 16:07:55 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:24.624 16:07:55 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:24.624 16:07:55 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:24.624 16:07:55 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:24.624 16:07:55 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:24.624 16:07:55 -- nvmf/common.sh@294 -- # net_devs=() 00:17:24.624 16:07:55 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:24.624 16:07:55 -- nvmf/common.sh@295 -- # e810=() 00:17:24.624 16:07:55 -- nvmf/common.sh@295 -- # local -ga e810 00:17:24.624 16:07:55 -- nvmf/common.sh@296 -- # x722=() 00:17:24.624 16:07:55 -- nvmf/common.sh@296 -- # local -ga x722 00:17:24.624 16:07:55 -- nvmf/common.sh@297 -- # mlx=() 00:17:24.624 16:07:55 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:24.624 16:07:55 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:24.624 16:07:55 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:24.624 16:07:55 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:24.624 16:07:55 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:24.624 16:07:55 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:24.624 16:07:55 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:24.624 16:07:55 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:24.624 16:07:55 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:24.624 16:07:55 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:24.624 16:07:55 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:24.624 16:07:55 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:24.624 16:07:55 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:24.624 16:07:55 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:24.624 16:07:55 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:24.624 16:07:55 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:24.624 16:07:55 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
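The xtrace above and in the entries that follow shows nvmf/common.sh (gather_supported_nvmf_pci_devs) building its list of RDMA-capable NICs: it collects the known Intel E810/X722 and Mellanox device IDs, keeps only the Mellanox entries for this phy/rdma run, and then resolves the kernel net interface behind each matching PCI address. A minimal standalone sketch of that discovery step, in bash (illustrative only, not the SPDK helper itself; the device-ID list is copied from the trace):

    #!/usr/bin/env bash
    # Illustrative sketch of the NIC discovery traced above: find Mellanox
    # (vendor 0x15b3) PCI devices whose device ID is in the supported list and
    # print the net interface behind each one. Not the gather_supported_nvmf_pci_devs code.
    mellanox=0x15b3
    supported=(0xa2dc 0x1021 0xa2d6 0x101d 0x1017 0x1019 0x1015 0x1013)
    for pci in /sys/bus/pci/devices/*; do
        [[ -d $pci/net ]] || continue                      # skip devices with no net interface
        [[ $(<"$pci/vendor") == "$mellanox" ]] || continue
        dev_id=$(<"$pci/device")
        for id in "${supported[@]}"; do
            if [[ $dev_id == "$id" ]]; then
                netifs=("$pci"/net/*)                      # e.g. .../net/mlx_0_0
                echo "Found ${pci##*/} ($mellanox - $dev_id): ${netifs[0]##*/}"
            fi
        done
    done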
00:17:24.624 16:07:55 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:24.624 16:07:55 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:24.624 16:07:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:24.624 16:07:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:24.624 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:24.624 16:07:55 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:24.624 16:07:55 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:24.624 16:07:55 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:24.624 16:07:55 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:24.624 16:07:55 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:24.624 16:07:55 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:24.624 16:07:55 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:24.624 16:07:55 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:24.624 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:24.624 16:07:55 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:24.624 16:07:55 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:24.624 16:07:55 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:24.624 16:07:55 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:24.624 16:07:55 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:24.624 16:07:55 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:24.624 16:07:55 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:24.624 16:07:55 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:24.624 16:07:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:24.624 16:07:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.624 16:07:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:24.624 16:07:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.624 16:07:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:24.624 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:24.624 16:07:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.624 16:07:55 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:24.624 16:07:55 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:24.624 16:07:55 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:24.624 16:07:55 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:24.624 16:07:55 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:24.624 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:24.624 16:07:55 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:24.624 16:07:55 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:24.624 16:07:55 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:24.624 16:07:55 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:24.624 16:07:55 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:24.624 16:07:55 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:24.624 16:07:55 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:24.624 16:07:55 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:24.624 16:07:55 -- nvmf/common.sh@57 -- # uname 00:17:24.624 16:07:55 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:24.624 16:07:55 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:24.624 16:07:55 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:24.624 16:07:55 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:24.624 
16:07:55 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:24.624 16:07:55 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:24.624 16:07:55 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:24.624 16:07:55 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:24.624 16:07:55 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:24.624 16:07:55 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:24.884 16:07:55 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:24.884 16:07:55 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:24.884 16:07:55 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:24.884 16:07:55 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:24.884 16:07:55 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:24.884 16:07:55 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:24.884 16:07:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:24.884 16:07:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.884 16:07:55 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:24.884 16:07:55 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:24.884 16:07:55 -- nvmf/common.sh@104 -- # continue 2 00:17:24.884 16:07:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:24.884 16:07:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.884 16:07:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:24.884 16:07:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.884 16:07:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:24.884 16:07:55 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:24.884 16:07:55 -- nvmf/common.sh@104 -- # continue 2 00:17:24.884 16:07:55 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:24.884 16:07:55 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:24.884 16:07:55 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:24.884 16:07:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:24.884 16:07:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:24.884 16:07:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:24.884 16:07:55 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:24.884 16:07:55 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:24.884 16:07:55 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:24.884 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:24.884 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:24.884 altname enp217s0f0np0 00:17:24.884 altname ens818f0np0 00:17:24.884 inet 192.168.100.8/24 scope global mlx_0_0 00:17:24.884 valid_lft forever preferred_lft forever 00:17:24.884 16:07:55 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:24.884 16:07:55 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:24.884 16:07:55 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:24.884 16:07:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:24.884 16:07:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:24.884 16:07:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:24.884 16:07:55 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:24.884 16:07:55 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:24.884 16:07:55 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:24.884 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:24.884 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:24.884 altname enp217s0f1np1 
00:17:24.884 altname ens818f1np1 00:17:24.884 inet 192.168.100.9/24 scope global mlx_0_1 00:17:24.884 valid_lft forever preferred_lft forever 00:17:24.884 16:07:55 -- nvmf/common.sh@410 -- # return 0 00:17:24.884 16:07:55 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:24.884 16:07:55 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:24.884 16:07:55 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:24.884 16:07:55 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:24.884 16:07:55 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:24.884 16:07:55 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:24.884 16:07:55 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:24.884 16:07:55 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:24.884 16:07:55 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:24.884 16:07:55 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:24.884 16:07:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:24.884 16:07:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.884 16:07:55 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:24.884 16:07:55 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:24.884 16:07:55 -- nvmf/common.sh@104 -- # continue 2 00:17:24.884 16:07:55 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:24.885 16:07:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.885 16:07:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:24.885 16:07:55 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:24.885 16:07:55 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:24.885 16:07:55 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:24.885 16:07:55 -- nvmf/common.sh@104 -- # continue 2 00:17:24.885 16:07:55 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:24.885 16:07:55 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:24.885 16:07:55 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:24.885 16:07:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:24.885 16:07:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:24.885 16:07:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:24.885 16:07:55 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:24.885 16:07:55 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:24.885 16:07:55 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:24.885 16:07:55 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:24.885 16:07:55 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:24.885 16:07:55 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:24.885 16:07:55 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:24.885 192.168.100.9' 00:17:24.885 16:07:55 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:24.885 192.168.100.9' 00:17:24.885 16:07:55 -- nvmf/common.sh@445 -- # head -n 1 00:17:24.885 16:07:55 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:24.885 16:07:55 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:24.885 192.168.100.9' 00:17:24.885 16:07:55 -- nvmf/common.sh@446 -- # head -n 1 00:17:24.885 16:07:55 -- nvmf/common.sh@446 -- # tail -n +2 00:17:24.885 16:07:55 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:24.885 16:07:55 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:24.885 16:07:55 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:17:24.885 16:07:55 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:24.885 16:07:55 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:24.885 16:07:55 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:24.885 16:07:55 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:17:24.885 16:07:55 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:24.885 16:07:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:24.885 16:07:55 -- common/autotest_common.sh@10 -- # set +x 00:17:24.885 16:07:55 -- nvmf/common.sh@469 -- # nvmfpid=1330199 00:17:24.885 16:07:55 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:17:24.885 16:07:55 -- nvmf/common.sh@470 -- # waitforlisten 1330199 00:17:24.885 16:07:55 -- common/autotest_common.sh@829 -- # '[' -z 1330199 ']' 00:17:24.885 16:07:55 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:24.885 16:07:55 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:24.885 16:07:55 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:24.885 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:24.885 16:07:55 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:24.885 16:07:55 -- common/autotest_common.sh@10 -- # set +x 00:17:24.885 [2024-11-20 16:07:55.667042] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:24.885 [2024-11-20 16:07:55.667093] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:25.144 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.144 [2024-11-20 16:07:55.737778] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:25.144 [2024-11-20 16:07:55.775444] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:25.144 [2024-11-20 16:07:55.775563] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:25.144 [2024-11-20 16:07:55.775574] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:25.144 [2024-11-20 16:07:55.775583] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
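By this point nvmftestinit has resolved the two RDMA interfaces (mlx_0_0 and mlx_0_1 at 192.168.100.8/192.168.100.9), loaded nvme-rdma, and nvmfappstart has launched nvmf_tgt (pid 1330199) and waited for it to answer on /var/tmp/spdk.sock; the entries that follow configure the RDMA transport and the nqn.2016-06.io.spdk:cnode1 subsystem through rpc_cmd. A rough standalone sketch of that start-and-configure sequence, assuming a built SPDK tree (a simplified stand-in, not the autotest_common.sh/delete_subsystem.sh implementation):

    #!/usr/bin/env bash
    # Simplified stand-in for nvmfappstart/waitforlisten plus the rpc_cmd calls
    # traced below. Assumes an SPDK build tree in $SPDK_DIR; paths are examples.
    set -e
    SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-phy-autotest/spdk}
    RPC_SOCK=/var/tmp/spdk.sock

    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x3 &
    nvmfpid=$!

    # Poll the RPC socket until the target answers, as waitforlisten does.
    for _ in $(seq 1 100); do
        if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods &>/dev/null; then
            break
        fi
        sleep 0.1
    done

    # RDMA transport, subsystem and listener, mirroring the rpc_cmd calls below.
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    "$SPDK_DIR/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    "$SPDK_DIR/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    echo "nvmf_tgt running as pid $nvmfpid"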
00:17:25.144 [2024-11-20 16:07:55.779538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:25.144 [2024-11-20 16:07:55.779542] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:25.711 16:07:56 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:25.711 16:07:56 -- common/autotest_common.sh@862 -- # return 0 00:17:25.711 16:07:56 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:25.711 16:07:56 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:25.711 16:07:56 -- common/autotest_common.sh@10 -- # set +x 00:17:25.969 16:07:56 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:25.970 16:07:56 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:25.970 16:07:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.970 16:07:56 -- common/autotest_common.sh@10 -- # set +x 00:17:25.970 [2024-11-20 16:07:56.551954] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x14c0b50/0x14c5000) succeed. 00:17:25.970 [2024-11-20 16:07:56.561049] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x14c2000/0x15066a0) succeed. 00:17:25.970 16:07:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.970 16:07:56 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:25.970 16:07:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.970 16:07:56 -- common/autotest_common.sh@10 -- # set +x 00:17:25.970 16:07:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.970 16:07:56 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:25.970 16:07:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.970 16:07:56 -- common/autotest_common.sh@10 -- # set +x 00:17:25.970 [2024-11-20 16:07:56.645108] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:25.970 16:07:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.970 16:07:56 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:25.970 16:07:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.970 16:07:56 -- common/autotest_common.sh@10 -- # set +x 00:17:25.970 NULL1 00:17:25.970 16:07:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.970 16:07:56 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:17:25.970 16:07:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.970 16:07:56 -- common/autotest_common.sh@10 -- # set +x 00:17:25.970 Delay0 00:17:25.970 16:07:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.970 16:07:56 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:25.970 16:07:56 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.970 16:07:56 -- common/autotest_common.sh@10 -- # set +x 00:17:25.970 16:07:56 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.970 16:07:56 -- target/delete_subsystem.sh@28 -- # perf_pid=1330483 00:17:25.970 16:07:56 -- target/delete_subsystem.sh@30 -- # sleep 2 00:17:25.970 16:07:56 -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma 
adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:25.970 EAL: No free 2048 kB hugepages reported on node 1 00:17:25.970 [2024-11-20 16:07:56.758103] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:28.499 16:07:58 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:28.499 16:07:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.499 16:07:58 -- common/autotest_common.sh@10 -- # set +x 00:17:29.068 NVMe io qpair process completion error 00:17:29.068 NVMe io qpair process completion error 00:17:29.068 NVMe io qpair process completion error 00:17:29.068 NVMe io qpair process completion error 00:17:29.068 NVMe io qpair process completion error 00:17:29.068 NVMe io qpair process completion error 00:17:29.068 16:07:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.068 16:07:59 -- target/delete_subsystem.sh@34 -- # delay=0 00:17:29.068 16:07:59 -- target/delete_subsystem.sh@35 -- # kill -0 1330483 00:17:29.068 16:07:59 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:29.634 16:08:00 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:29.634 16:08:00 -- target/delete_subsystem.sh@35 -- # kill -0 1330483 00:17:29.634 16:08:00 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:30.202 Write completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Write completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Read completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Read completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Read completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Read completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Read completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Read completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Read completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Read completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Read completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Read completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Read completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Read completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Read completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Write completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Read completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Read completed with error (sct=0, sc=8) 00:17:30.202 starting I/O failed: -6 00:17:30.202 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Write completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Write completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read 
completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Write completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Write completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Write completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Write completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Write completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Write completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Write completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Write completed with error (sct=0, sc=8) 00:17:30.203 starting I/O failed: -6 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 Write completed with error (sct=0, sc=8) 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 Write completed with error (sct=0, sc=8) 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 Write completed with error (sct=0, sc=8) 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 Read completed with error (sct=0, sc=8) 00:17:30.203 Write completed with error (sct=0, sc=8) 00:17:30.203 Write completed with error (sct=0, sc=8) 00:17:30.203 Read completed with error (sct=0, sc=8) 
00:17:30.203 [output condensed: a long run of further 'Read completed with error (sct=0, sc=8)' / 'Write completed with error (sct=0, sc=8)' completions interleaved with 'starting I/O failed: -6' entries, logged between 00:17:30.203 and 00:17:30.204 while the subsystem was deleted out from under the I/O workload] 00:17:30.204 Read completed with
error (sct=0, sc=8) 00:17:30.204 Read completed with error (sct=0, sc=8) 00:17:30.204 Read completed with error (sct=0, sc=8) 00:17:30.204 Read completed with error (sct=0, sc=8) 00:17:30.204 Read completed with error (sct=0, sc=8) 00:17:30.204 Read completed with error (sct=0, sc=8) 00:17:30.204 Write completed with error (sct=0, sc=8) 00:17:30.204 Read completed with error (sct=0, sc=8) 00:17:30.204 Read completed with error (sct=0, sc=8) 00:17:30.204 Read completed with error (sct=0, sc=8) 00:17:30.204 Read completed with error (sct=0, sc=8) 00:17:30.204 Read completed with error (sct=0, sc=8) 00:17:30.204 Read completed with error (sct=0, sc=8) 00:17:30.204 Read completed with error (sct=0, sc=8) 00:17:30.204 Read completed with error (sct=0, sc=8) 00:17:30.204 Read completed with error (sct=0, sc=8) 00:17:30.204 Read completed with error (sct=0, sc=8) 00:17:30.204 Read completed with error (sct=0, sc=8) 00:17:30.204 Read completed with error (sct=0, sc=8) 00:17:30.204 16:08:00 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:30.204 16:08:00 -- target/delete_subsystem.sh@35 -- # kill -0 1330483 00:17:30.204 16:08:00 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:17:30.204 [2024-11-20 16:08:00.856641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:17:30.204 [2024-11-20 16:08:00.856689] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:30.204 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:17:30.204 Initializing NVMe Controllers 00:17:30.204 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:30.204 Controller IO queue size 128, less than required. 00:17:30.204 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:30.204 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:30.204 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:30.204 Initialization complete. Launching workers. 
00:17:30.204 ======================================================== 00:17:30.204 Latency(us) 00:17:30.204 Device Information : IOPS MiB/s Average min max 00:17:30.204 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 80.38 0.04 1595397.62 1000192.98 2981692.59 00:17:30.204 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 80.38 0.04 1596926.04 1001029.31 2983151.25 00:17:30.204 ======================================================== 00:17:30.204 Total : 160.75 0.08 1596161.83 1000192.98 2983151.25 00:17:30.204 00:17:30.771 16:08:01 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:17:30.771 16:08:01 -- target/delete_subsystem.sh@35 -- # kill -0 1330483 00:17:30.771 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1330483) - No such process 00:17:30.771 16:08:01 -- target/delete_subsystem.sh@45 -- # NOT wait 1330483 00:17:30.771 16:08:01 -- common/autotest_common.sh@650 -- # local es=0 00:17:30.771 16:08:01 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1330483 00:17:30.771 16:08:01 -- common/autotest_common.sh@638 -- # local arg=wait 00:17:30.771 16:08:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.771 16:08:01 -- common/autotest_common.sh@642 -- # type -t wait 00:17:30.771 16:08:01 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:30.771 16:08:01 -- common/autotest_common.sh@653 -- # wait 1330483 00:17:30.771 16:08:01 -- common/autotest_common.sh@653 -- # es=1 00:17:30.771 16:08:01 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:30.771 16:08:01 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:30.771 16:08:01 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:30.771 16:08:01 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:30.771 16:08:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.771 16:08:01 -- common/autotest_common.sh@10 -- # set +x 00:17:30.771 16:08:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.771 16:08:01 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:30.771 16:08:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.771 16:08:01 -- common/autotest_common.sh@10 -- # set +x 00:17:30.771 [2024-11-20 16:08:01.375425] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:30.771 16:08:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.771 16:08:01 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:17:30.771 16:08:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.771 16:08:01 -- common/autotest_common.sh@10 -- # set +x 00:17:30.771 16:08:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.771 16:08:01 -- target/delete_subsystem.sh@54 -- # perf_pid=1331298 00:17:30.771 16:08:01 -- target/delete_subsystem.sh@56 -- # delay=0 00:17:30.771 16:08:01 -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:17:30.771 16:08:01 -- target/delete_subsystem.sh@57 -- # kill -0 1331298 00:17:30.771 16:08:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:30.771 
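The sct=0/sc=8 completions above correspond to queued reads and writes failing when the subsystem is deleted underneath a running workload; delete_subsystem.sh then recreates nqn.2016-06.io.spdk:cnode1, relaunches spdk_nvme_perf, and polls the perf PID until it exits. A minimal sketch of that launch-and-poll pattern, using the same binary and flags shown in the trace (the rpc.py call is a hypothetical stand-in for the script's rpc_cmd wrapper):

  # Launch I/O against the RDMA subsystem in the background (flags copied from the run above).
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf \
      -c 0xC -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  # Delete the subsystem while I/O is outstanding; in-flight requests then
  # complete with an error status, as seen in the log above.
  # (hypothetical stand-in for the script's rpc_cmd helper)
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # Poll the perf process until it notices and exits, giving up after ~10 s.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      (( delay++ > 20 )) && { echo "perf did not exit"; exit 1; }
      sleep 0.5
  done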
EAL: No free 2048 kB hugepages reported on node 1 00:17:30.771 [2024-11-20 16:08:01.461451] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:17:31.336 16:08:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:31.336 16:08:01 -- target/delete_subsystem.sh@57 -- # kill -0 1331298 00:17:31.336 16:08:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:31.904 16:08:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:31.904 16:08:02 -- target/delete_subsystem.sh@57 -- # kill -0 1331298 00:17:31.904 16:08:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:32.162 16:08:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:32.163 16:08:02 -- target/delete_subsystem.sh@57 -- # kill -0 1331298 00:17:32.163 16:08:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:32.731 16:08:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:32.731 16:08:03 -- target/delete_subsystem.sh@57 -- # kill -0 1331298 00:17:32.731 16:08:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:33.298 16:08:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:33.298 16:08:03 -- target/delete_subsystem.sh@57 -- # kill -0 1331298 00:17:33.298 16:08:03 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:33.867 16:08:04 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:33.867 16:08:04 -- target/delete_subsystem.sh@57 -- # kill -0 1331298 00:17:33.867 16:08:04 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:34.127 16:08:04 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:34.127 16:08:04 -- target/delete_subsystem.sh@57 -- # kill -0 1331298 00:17:34.127 16:08:04 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:34.694 16:08:05 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:34.694 16:08:05 -- target/delete_subsystem.sh@57 -- # kill -0 1331298 00:17:34.694 16:08:05 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:35.262 16:08:05 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:35.262 16:08:05 -- target/delete_subsystem.sh@57 -- # kill -0 1331298 00:17:35.262 16:08:05 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:35.829 16:08:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:35.829 16:08:06 -- target/delete_subsystem.sh@57 -- # kill -0 1331298 00:17:35.829 16:08:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:36.398 16:08:06 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:36.398 16:08:06 -- target/delete_subsystem.sh@57 -- # kill -0 1331298 00:17:36.398 16:08:06 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:36.657 16:08:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:36.657 16:08:07 -- target/delete_subsystem.sh@57 -- # kill -0 1331298 00:17:36.657 16:08:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:37.225 16:08:07 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:37.225 16:08:07 -- target/delete_subsystem.sh@57 -- # kill -0 1331298 00:17:37.225 16:08:07 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:37.792 16:08:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:37.792 16:08:08 -- target/delete_subsystem.sh@57 -- # kill -0 1331298 00:17:37.792 16:08:08 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:17:38.051 Initializing NVMe 
Controllers 00:17:38.051 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:17:38.051 Controller IO queue size 128, less than required. 00:17:38.051 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:17:38.051 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:17:38.051 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:17:38.051 Initialization complete. Launching workers. 00:17:38.051 ======================================================== 00:17:38.051 Latency(us) 00:17:38.051 Device Information : IOPS MiB/s Average min max 00:17:38.051 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001164.66 1000053.13 1003941.81 00:17:38.051 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002306.88 1000065.73 1005676.23 00:17:38.051 ======================================================== 00:17:38.051 Total : 256.00 0.12 1001735.77 1000053.13 1005676.23 00:17:38.051 00:17:38.310 16:08:08 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:17:38.310 16:08:08 -- target/delete_subsystem.sh@57 -- # kill -0 1331298 00:17:38.310 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1331298) - No such process 00:17:38.310 16:08:08 -- target/delete_subsystem.sh@67 -- # wait 1331298 00:17:38.310 16:08:08 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:38.310 16:08:08 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:17:38.310 16:08:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:38.310 16:08:08 -- nvmf/common.sh@116 -- # sync 00:17:38.310 16:08:08 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:38.310 16:08:08 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:38.310 16:08:08 -- nvmf/common.sh@119 -- # set +e 00:17:38.310 16:08:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:38.310 16:08:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:38.310 rmmod nvme_rdma 00:17:38.310 rmmod nvme_fabrics 00:17:38.310 16:08:09 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:38.310 16:08:09 -- nvmf/common.sh@123 -- # set -e 00:17:38.310 16:08:09 -- nvmf/common.sh@124 -- # return 0 00:17:38.310 16:08:09 -- nvmf/common.sh@477 -- # '[' -n 1330199 ']' 00:17:38.310 16:08:09 -- nvmf/common.sh@478 -- # killprocess 1330199 00:17:38.310 16:08:09 -- common/autotest_common.sh@936 -- # '[' -z 1330199 ']' 00:17:38.310 16:08:09 -- common/autotest_common.sh@940 -- # kill -0 1330199 00:17:38.310 16:08:09 -- common/autotest_common.sh@941 -- # uname 00:17:38.310 16:08:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:38.310 16:08:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1330199 00:17:38.310 16:08:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:38.311 16:08:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:38.311 16:08:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1330199' 00:17:38.311 killing process with pid 1330199 00:17:38.311 16:08:09 -- common/autotest_common.sh@955 -- # kill 1330199 00:17:38.311 16:08:09 -- common/autotest_common.sh@960 -- # wait 1330199 00:17:38.569 16:08:09 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:38.569 16:08:09 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:38.569 00:17:38.569 real 0m20.799s 
00:17:38.569 user 0m50.194s 00:17:38.569 sys 0m6.599s 00:17:38.569 16:08:09 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:38.569 16:08:09 -- common/autotest_common.sh@10 -- # set +x 00:17:38.569 ************************************ 00:17:38.569 END TEST nvmf_delete_subsystem 00:17:38.569 ************************************ 00:17:38.569 16:08:09 -- nvmf/nvmf.sh@36 -- # [[ 1 -eq 1 ]] 00:17:38.569 16:08:09 -- nvmf/nvmf.sh@37 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:38.569 16:08:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:38.569 16:08:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:38.569 16:08:09 -- common/autotest_common.sh@10 -- # set +x 00:17:38.569 ************************************ 00:17:38.569 START TEST nvmf_nvme_cli 00:17:38.569 ************************************ 00:17:38.569 16:08:09 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=rdma 00:17:38.828 * Looking for test storage... 00:17:38.828 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:38.828 16:08:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:38.828 16:08:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:38.828 16:08:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:38.828 16:08:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:38.828 16:08:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:38.828 16:08:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:38.828 16:08:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:38.828 16:08:09 -- scripts/common.sh@335 -- # IFS=.-: 00:17:38.828 16:08:09 -- scripts/common.sh@335 -- # read -ra ver1 00:17:38.828 16:08:09 -- scripts/common.sh@336 -- # IFS=.-: 00:17:38.828 16:08:09 -- scripts/common.sh@336 -- # read -ra ver2 00:17:38.828 16:08:09 -- scripts/common.sh@337 -- # local 'op=<' 00:17:38.828 16:08:09 -- scripts/common.sh@339 -- # ver1_l=2 00:17:38.828 16:08:09 -- scripts/common.sh@340 -- # ver2_l=1 00:17:38.828 16:08:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:38.828 16:08:09 -- scripts/common.sh@343 -- # case "$op" in 00:17:38.828 16:08:09 -- scripts/common.sh@344 -- # : 1 00:17:38.828 16:08:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:38.828 16:08:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:38.828 16:08:09 -- scripts/common.sh@364 -- # decimal 1 00:17:38.828 16:08:09 -- scripts/common.sh@352 -- # local d=1 00:17:38.828 16:08:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:38.828 16:08:09 -- scripts/common.sh@354 -- # echo 1 00:17:38.828 16:08:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:38.828 16:08:09 -- scripts/common.sh@365 -- # decimal 2 00:17:38.828 16:08:09 -- scripts/common.sh@352 -- # local d=2 00:17:38.828 16:08:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:38.828 16:08:09 -- scripts/common.sh@354 -- # echo 2 00:17:38.828 16:08:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:38.828 16:08:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:38.828 16:08:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:38.828 16:08:09 -- scripts/common.sh@367 -- # return 0 00:17:38.828 16:08:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:38.828 16:08:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:38.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.828 --rc genhtml_branch_coverage=1 00:17:38.828 --rc genhtml_function_coverage=1 00:17:38.828 --rc genhtml_legend=1 00:17:38.828 --rc geninfo_all_blocks=1 00:17:38.828 --rc geninfo_unexecuted_blocks=1 00:17:38.828 00:17:38.828 ' 00:17:38.828 16:08:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:38.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.828 --rc genhtml_branch_coverage=1 00:17:38.828 --rc genhtml_function_coverage=1 00:17:38.828 --rc genhtml_legend=1 00:17:38.828 --rc geninfo_all_blocks=1 00:17:38.828 --rc geninfo_unexecuted_blocks=1 00:17:38.828 00:17:38.828 ' 00:17:38.828 16:08:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:38.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.828 --rc genhtml_branch_coverage=1 00:17:38.828 --rc genhtml_function_coverage=1 00:17:38.828 --rc genhtml_legend=1 00:17:38.828 --rc geninfo_all_blocks=1 00:17:38.828 --rc geninfo_unexecuted_blocks=1 00:17:38.828 00:17:38.828 ' 00:17:38.828 16:08:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:38.828 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:38.828 --rc genhtml_branch_coverage=1 00:17:38.828 --rc genhtml_function_coverage=1 00:17:38.828 --rc genhtml_legend=1 00:17:38.828 --rc geninfo_all_blocks=1 00:17:38.828 --rc geninfo_unexecuted_blocks=1 00:17:38.828 00:17:38.828 ' 00:17:38.828 16:08:09 -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:38.828 16:08:09 -- nvmf/common.sh@7 -- # uname -s 00:17:38.828 16:08:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:38.828 16:08:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:38.828 16:08:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:38.828 16:08:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:38.828 16:08:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:38.828 16:08:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:38.828 16:08:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:38.828 16:08:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:38.828 16:08:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:38.828 16:08:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:38.828 16:08:09 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:38.828 16:08:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:38.828 16:08:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:38.828 16:08:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:38.828 16:08:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:38.828 16:08:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:38.828 16:08:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:38.828 16:08:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:38.828 16:08:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:38.828 16:08:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.828 16:08:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.828 16:08:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.828 16:08:09 -- paths/export.sh@5 -- # export PATH 00:17:38.828 16:08:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:38.829 16:08:09 -- nvmf/common.sh@46 -- # : 0 00:17:38.829 16:08:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:38.829 16:08:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:38.829 16:08:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:38.829 16:08:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:38.829 16:08:09 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:38.829 16:08:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:38.829 16:08:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:38.829 16:08:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:38.829 16:08:09 -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:38.829 16:08:09 -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:38.829 16:08:09 -- target/nvme_cli.sh@14 -- # devs=() 00:17:38.829 16:08:09 -- target/nvme_cli.sh@16 -- # nvmftestinit 00:17:38.829 16:08:09 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:38.829 16:08:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:38.829 16:08:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:38.829 16:08:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:38.829 16:08:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:38.829 16:08:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:38.829 16:08:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:38.829 16:08:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:38.829 16:08:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:38.829 16:08:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:38.829 16:08:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:38.829 16:08:09 -- common/autotest_common.sh@10 -- # set +x 00:17:45.399 16:08:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:45.399 16:08:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:45.399 16:08:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:45.399 16:08:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:45.399 16:08:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:45.399 16:08:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:45.399 16:08:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:45.399 16:08:16 -- nvmf/common.sh@294 -- # net_devs=() 00:17:45.399 16:08:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:45.399 16:08:16 -- nvmf/common.sh@295 -- # e810=() 00:17:45.399 16:08:16 -- nvmf/common.sh@295 -- # local -ga e810 00:17:45.399 16:08:16 -- nvmf/common.sh@296 -- # x722=() 00:17:45.399 16:08:16 -- nvmf/common.sh@296 -- # local -ga x722 00:17:45.399 16:08:16 -- nvmf/common.sh@297 -- # mlx=() 00:17:45.399 16:08:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:45.399 16:08:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:45.399 16:08:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:45.399 16:08:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:45.399 16:08:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:45.399 16:08:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:45.399 16:08:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:45.399 16:08:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:45.399 16:08:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:45.399 16:08:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:45.399 16:08:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:45.399 16:08:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:45.399 16:08:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:45.399 16:08:16 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:45.399 16:08:16 
-- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:45.399 16:08:16 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:45.399 16:08:16 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:45.399 16:08:16 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:45.399 16:08:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:45.399 16:08:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:45.399 16:08:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:45.399 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:45.399 16:08:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:45.399 16:08:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:45.399 16:08:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:45.399 16:08:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:45.399 16:08:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:45.399 16:08:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:45.399 16:08:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:45.399 16:08:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:45.399 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:45.399 16:08:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:45.399 16:08:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:45.399 16:08:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:45.399 16:08:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:45.399 16:08:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:45.399 16:08:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:45.399 16:08:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:45.399 16:08:16 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:45.399 16:08:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:45.399 16:08:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.399 16:08:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:45.399 16:08:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.399 16:08:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:45.399 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:45.399 16:08:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.399 16:08:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:45.399 16:08:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:45.399 16:08:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:45.399 16:08:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:45.399 16:08:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:45.399 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:45.399 16:08:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:45.399 16:08:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:45.399 16:08:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:45.399 16:08:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:45.399 16:08:16 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:45.399 16:08:16 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:45.399 16:08:16 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:45.399 16:08:16 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:45.399 16:08:16 -- nvmf/common.sh@57 -- # uname 00:17:45.399 16:08:16 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:45.399 
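The "Found 0000:d9:00.0 (0x15b3 - 0x1015)" and "Found net devices under ..." lines come from nvmf/common.sh matching the Mellanox ConnectX PCI functions and then asking sysfs which netdev each function owns. A rough, self-contained equivalent of that lookup, assuming the two PCI addresses reported above:

  # Map each RDMA-capable PCI function to the net interface sysfs exposes for it.
  for pci in 0000:d9:00.0 0000:d9:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # e.g. .../net/mlx_0_0
      pci_net_devs=("${pci_net_devs[@]##*/}")            # keep only the interface names
      echo "Found net devices under $pci: ${pci_net_devs[*]}"
  done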
16:08:16 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:45.399 16:08:16 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:45.399 16:08:16 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:45.399 16:08:16 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:45.399 16:08:16 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:45.399 16:08:16 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:45.399 16:08:16 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:45.399 16:08:16 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:45.399 16:08:16 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:45.399 16:08:16 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:45.399 16:08:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:45.399 16:08:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:45.399 16:08:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:45.400 16:08:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:45.400 16:08:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:45.400 16:08:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:45.400 16:08:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:45.400 16:08:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:45.400 16:08:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:45.400 16:08:16 -- nvmf/common.sh@104 -- # continue 2 00:17:45.400 16:08:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:45.400 16:08:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:45.400 16:08:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:45.400 16:08:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:45.400 16:08:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:45.400 16:08:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:45.400 16:08:16 -- nvmf/common.sh@104 -- # continue 2 00:17:45.400 16:08:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:45.400 16:08:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:45.400 16:08:16 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:45.400 16:08:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:45.400 16:08:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:45.400 16:08:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:45.400 16:08:16 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:45.400 16:08:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:45.400 16:08:16 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:45.400 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:45.400 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:45.400 altname enp217s0f0np0 00:17:45.400 altname ens818f0np0 00:17:45.400 inet 192.168.100.8/24 scope global mlx_0_0 00:17:45.400 valid_lft forever preferred_lft forever 00:17:45.400 16:08:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:45.400 16:08:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:45.400 16:08:16 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:45.400 16:08:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:45.400 16:08:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:45.400 16:08:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:45.400 16:08:16 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:45.400 16:08:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:45.400 16:08:16 -- nvmf/common.sh@80 -- # ip addr show 
mlx_0_1 00:17:45.400 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:45.400 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:45.400 altname enp217s0f1np1 00:17:45.400 altname ens818f1np1 00:17:45.400 inet 192.168.100.9/24 scope global mlx_0_1 00:17:45.400 valid_lft forever preferred_lft forever 00:17:45.400 16:08:16 -- nvmf/common.sh@410 -- # return 0 00:17:45.400 16:08:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:45.400 16:08:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:45.400 16:08:16 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:45.400 16:08:16 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:45.400 16:08:16 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:45.400 16:08:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:45.400 16:08:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:45.400 16:08:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:45.400 16:08:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:45.400 16:08:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:45.400 16:08:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:45.400 16:08:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:45.400 16:08:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:45.400 16:08:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:45.400 16:08:16 -- nvmf/common.sh@104 -- # continue 2 00:17:45.400 16:08:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:45.400 16:08:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:45.400 16:08:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:45.400 16:08:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:45.400 16:08:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:45.400 16:08:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:45.400 16:08:16 -- nvmf/common.sh@104 -- # continue 2 00:17:45.400 16:08:16 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:45.400 16:08:16 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:45.400 16:08:16 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:45.660 16:08:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:45.660 16:08:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:45.660 16:08:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:45.660 16:08:16 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:45.660 16:08:16 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:45.660 16:08:16 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:45.660 16:08:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:45.660 16:08:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:45.660 16:08:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:45.660 16:08:16 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:45.660 192.168.100.9' 00:17:45.660 16:08:16 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:45.660 192.168.100.9' 00:17:45.660 16:08:16 -- nvmf/common.sh@445 -- # head -n 1 00:17:45.660 16:08:16 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:45.660 16:08:16 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:45.660 192.168.100.9' 00:17:45.660 16:08:16 -- nvmf/common.sh@446 -- # tail -n +2 00:17:45.660 16:08:16 -- nvmf/common.sh@446 -- # head -n 1 00:17:45.660 16:08:16 -- nvmf/common.sh@446 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:45.660 16:08:16 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:45.660 16:08:16 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:45.660 16:08:16 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:45.660 16:08:16 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:45.660 16:08:16 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:45.660 16:08:16 -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:17:45.660 16:08:16 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:45.660 16:08:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:45.660 16:08:16 -- common/autotest_common.sh@10 -- # set +x 00:17:45.660 16:08:16 -- nvmf/common.sh@469 -- # nvmfpid=1335847 00:17:45.660 16:08:16 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:45.660 16:08:16 -- nvmf/common.sh@470 -- # waitforlisten 1335847 00:17:45.660 16:08:16 -- common/autotest_common.sh@829 -- # '[' -z 1335847 ']' 00:17:45.660 16:08:16 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.660 16:08:16 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:45.660 16:08:16 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.660 16:08:16 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:45.660 16:08:16 -- common/autotest_common.sh@10 -- # set +x 00:17:45.660 [2024-11-20 16:08:16.323258] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:45.660 [2024-11-20 16:08:16.323309] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:45.660 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.660 [2024-11-20 16:08:16.394322] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:45.660 [2024-11-20 16:08:16.432595] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:45.660 [2024-11-20 16:08:16.432707] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:45.660 [2024-11-20 16:08:16.432717] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:45.660 [2024-11-20 16:08:16.432726] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
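Once the target addresses are collected, the test starts nvmf_tgt with the flags shown (-i 0 -e 0xFFFF -m 0xF) and blocks in waitforlisten until the app answers on /var/tmp/spdk.sock. A simplified sketch of that wait, assuming rpc.py's rpc_get_methods as the liveness probe (the real helper lives in common/autotest_common.sh and may differ in detail):

  # Start the target exactly as in the trace and wait for its RPC socket to come up.
  /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
      kill -0 "$nvmfpid" || exit 1   # give up if the target process died
      sleep 0.5
  done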
00:17:45.660 [2024-11-20 16:08:16.432769] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:45.660 [2024-11-20 16:08:16.432863] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:45.661 [2024-11-20 16:08:16.432949] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:45.661 [2024-11-20 16:08:16.432951] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.644 16:08:17 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:46.644 16:08:17 -- common/autotest_common.sh@862 -- # return 0 00:17:46.644 16:08:17 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:46.644 16:08:17 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:46.644 16:08:17 -- common/autotest_common.sh@10 -- # set +x 00:17:46.644 16:08:17 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:46.644 16:08:17 -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:46.644 16:08:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.644 16:08:17 -- common/autotest_common.sh@10 -- # set +x 00:17:46.644 [2024-11-20 16:08:17.217969] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x94a0d0/0x94e5a0) succeed. 00:17:46.644 [2024-11-20 16:08:17.227133] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x94b670/0x98fc40) succeed. 00:17:46.644 16:08:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.644 16:08:17 -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:46.644 16:08:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.644 16:08:17 -- common/autotest_common.sh@10 -- # set +x 00:17:46.644 Malloc0 00:17:46.644 16:08:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.644 16:08:17 -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:17:46.644 16:08:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.644 16:08:17 -- common/autotest_common.sh@10 -- # set +x 00:17:46.644 Malloc1 00:17:46.644 16:08:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.644 16:08:17 -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:17:46.645 16:08:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.645 16:08:17 -- common/autotest_common.sh@10 -- # set +x 00:17:46.645 16:08:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.645 16:08:17 -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:46.645 16:08:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.645 16:08:17 -- common/autotest_common.sh@10 -- # set +x 00:17:46.645 16:08:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.645 16:08:17 -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:46.645 16:08:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.645 16:08:17 -- common/autotest_common.sh@10 -- # set +x 00:17:46.645 16:08:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.645 16:08:17 -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:17:46.645 16:08:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.645 16:08:17 -- common/autotest_common.sh@10 -- # set +x 00:17:46.645 [2024-11-20 16:08:17.423781] 
rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:46.645 16:08:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.645 16:08:17 -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:17:46.645 16:08:17 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.645 16:08:17 -- common/autotest_common.sh@10 -- # set +x 00:17:46.645 16:08:17 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.645 16:08:17 -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -a 192.168.100.8 -s 4420 00:17:46.976 00:17:46.976 Discovery Log Number of Records 2, Generation counter 2 00:17:46.976 =====Discovery Log Entry 0====== 00:17:46.976 trtype: rdma 00:17:46.976 adrfam: ipv4 00:17:46.976 subtype: current discovery subsystem 00:17:46.977 treq: not required 00:17:46.977 portid: 0 00:17:46.977 trsvcid: 4420 00:17:46.977 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:17:46.977 traddr: 192.168.100.8 00:17:46.977 eflags: explicit discovery connections, duplicate discovery information 00:17:46.977 rdma_prtype: not specified 00:17:46.977 rdma_qptype: connected 00:17:46.977 rdma_cms: rdma-cm 00:17:46.977 rdma_pkey: 0x0000 00:17:46.977 =====Discovery Log Entry 1====== 00:17:46.977 trtype: rdma 00:17:46.977 adrfam: ipv4 00:17:46.977 subtype: nvme subsystem 00:17:46.977 treq: not required 00:17:46.977 portid: 0 00:17:46.977 trsvcid: 4420 00:17:46.977 subnqn: nqn.2016-06.io.spdk:cnode1 00:17:46.977 traddr: 192.168.100.8 00:17:46.977 eflags: none 00:17:46.977 rdma_prtype: not specified 00:17:46.977 rdma_qptype: connected 00:17:46.977 rdma_cms: rdma-cm 00:17:46.977 rdma_pkey: 0x0000 00:17:46.977 16:08:17 -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:17:46.977 16:08:17 -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:17:46.977 16:08:17 -- nvmf/common.sh@510 -- # local dev _ 00:17:46.977 16:08:17 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:46.977 16:08:17 -- nvmf/common.sh@509 -- # nvme list 00:17:46.977 16:08:17 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:46.977 16:08:17 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:46.977 16:08:17 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:46.977 16:08:17 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:46.977 16:08:17 -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:17:46.977 16:08:17 -- target/nvme_cli.sh@32 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:17:47.913 16:08:18 -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:17:47.913 16:08:18 -- common/autotest_common.sh@1187 -- # local i=0 00:17:47.913 16:08:18 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:17:47.913 16:08:18 -- common/autotest_common.sh@1189 -- # [[ -n 2 ]] 00:17:47.913 16:08:18 -- common/autotest_common.sh@1190 -- # nvme_device_counter=2 00:17:47.914 16:08:18 -- common/autotest_common.sh@1194 -- # sleep 2 00:17:49.818 16:08:20 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:17:49.818 16:08:20 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:17:49.818 16:08:20 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:17:49.818 16:08:20 
-- common/autotest_common.sh@1196 -- # nvme_devices=2 00:17:49.818 16:08:20 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:17:49.818 16:08:20 -- common/autotest_common.sh@1197 -- # return 0 00:17:49.818 16:08:20 -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:17:49.818 16:08:20 -- nvmf/common.sh@510 -- # local dev _ 00:17:49.818 16:08:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:49.818 16:08:20 -- nvmf/common.sh@509 -- # nvme list 00:17:49.818 16:08:20 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:49.818 16:08:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:49.818 16:08:20 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:49.818 16:08:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:49.818 16:08:20 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:49.818 16:08:20 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:49.818 16:08:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:49.818 16:08:20 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:49.818 16:08:20 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:49.818 16:08:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:49.818 16:08:20 -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:17:49.818 /dev/nvme0n2 ]] 00:17:49.818 16:08:20 -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:17:49.818 16:08:20 -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:17:49.818 16:08:20 -- nvmf/common.sh@510 -- # local dev _ 00:17:49.818 16:08:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:49.818 16:08:20 -- nvmf/common.sh@509 -- # nvme list 00:17:49.818 16:08:20 -- nvmf/common.sh@513 -- # [[ Node == /dev/nvme* ]] 00:17:49.818 16:08:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:49.818 16:08:20 -- nvmf/common.sh@513 -- # [[ --------------------- == /dev/nvme* ]] 00:17:49.818 16:08:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:49.818 16:08:20 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:17:49.818 16:08:20 -- nvmf/common.sh@514 -- # echo /dev/nvme0n1 00:17:49.818 16:08:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:49.818 16:08:20 -- nvmf/common.sh@513 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:17:49.818 16:08:20 -- nvmf/common.sh@514 -- # echo /dev/nvme0n2 00:17:49.818 16:08:20 -- nvmf/common.sh@512 -- # read -r dev _ 00:17:49.818 16:08:20 -- target/nvme_cli.sh@59 -- # nvme_num=2 00:17:49.818 16:08:20 -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:50.756 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:51.015 16:08:21 -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:51.015 16:08:21 -- common/autotest_common.sh@1208 -- # local i=0 00:17:51.015 16:08:21 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:17:51.015 16:08:21 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.015 16:08:21 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:17:51.015 16:08:21 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:51.015 16:08:21 -- common/autotest_common.sh@1220 -- # return 0 00:17:51.015 16:08:21 -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:17:51.015 16:08:21 -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:51.015 16:08:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.015 16:08:21 -- common/autotest_common.sh@10 -- # set +x 00:17:51.015 16:08:21 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.015 16:08:21 -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:17:51.015 16:08:21 -- target/nvme_cli.sh@70 -- # nvmftestfini 00:17:51.015 16:08:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:17:51.015 16:08:21 -- nvmf/common.sh@116 -- # sync 00:17:51.015 16:08:21 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:17:51.015 16:08:21 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:17:51.015 16:08:21 -- nvmf/common.sh@119 -- # set +e 00:17:51.015 16:08:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:17:51.015 16:08:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:17:51.015 rmmod nvme_rdma 00:17:51.015 rmmod nvme_fabrics 00:17:51.015 16:08:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:17:51.015 16:08:21 -- nvmf/common.sh@123 -- # set -e 00:17:51.015 16:08:21 -- nvmf/common.sh@124 -- # return 0 00:17:51.015 16:08:21 -- nvmf/common.sh@477 -- # '[' -n 1335847 ']' 00:17:51.015 16:08:21 -- nvmf/common.sh@478 -- # killprocess 1335847 00:17:51.015 16:08:21 -- common/autotest_common.sh@936 -- # '[' -z 1335847 ']' 00:17:51.015 16:08:21 -- common/autotest_common.sh@940 -- # kill -0 1335847 00:17:51.015 16:08:21 -- common/autotest_common.sh@941 -- # uname 00:17:51.015 16:08:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:51.015 16:08:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1335847 00:17:51.016 16:08:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:51.016 16:08:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:51.016 16:08:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1335847' 00:17:51.016 killing process with pid 1335847 00:17:51.016 16:08:21 -- common/autotest_common.sh@955 -- # kill 1335847 00:17:51.016 16:08:21 -- common/autotest_common.sh@960 -- # wait 1335847 00:17:51.275 16:08:22 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:17:51.275 16:08:22 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:17:51.275 00:17:51.275 real 0m12.656s 00:17:51.275 user 0m24.100s 00:17:51.275 sys 0m5.781s 00:17:51.275 16:08:22 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:17:51.275 16:08:22 -- common/autotest_common.sh@10 -- # set +x 00:17:51.275 ************************************ 00:17:51.275 END TEST nvmf_nvme_cli 00:17:51.275 ************************************ 00:17:51.275 16:08:22 -- nvmf/nvmf.sh@39 -- # [[ 0 -eq 1 ]] 00:17:51.275 16:08:22 -- nvmf/nvmf.sh@46 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:51.276 16:08:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:17:51.276 16:08:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:51.276 16:08:22 -- common/autotest_common.sh@10 -- # set +x 00:17:51.276 ************************************ 00:17:51.276 START TEST nvmf_host_management 00:17:51.276 ************************************ 00:17:51.276 16:08:22 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=rdma 00:17:51.536 * Looking for test storage... 
00:17:51.536 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:17:51.536 16:08:22 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:17:51.536 16:08:22 -- common/autotest_common.sh@1690 -- # lcov --version 00:17:51.536 16:08:22 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:17:51.536 16:08:22 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:17:51.536 16:08:22 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:17:51.536 16:08:22 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:17:51.536 16:08:22 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:17:51.536 16:08:22 -- scripts/common.sh@335 -- # IFS=.-: 00:17:51.536 16:08:22 -- scripts/common.sh@335 -- # read -ra ver1 00:17:51.536 16:08:22 -- scripts/common.sh@336 -- # IFS=.-: 00:17:51.536 16:08:22 -- scripts/common.sh@336 -- # read -ra ver2 00:17:51.536 16:08:22 -- scripts/common.sh@337 -- # local 'op=<' 00:17:51.536 16:08:22 -- scripts/common.sh@339 -- # ver1_l=2 00:17:51.536 16:08:22 -- scripts/common.sh@340 -- # ver2_l=1 00:17:51.536 16:08:22 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:17:51.536 16:08:22 -- scripts/common.sh@343 -- # case "$op" in 00:17:51.536 16:08:22 -- scripts/common.sh@344 -- # : 1 00:17:51.536 16:08:22 -- scripts/common.sh@363 -- # (( v = 0 )) 00:17:51.536 16:08:22 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:51.536 16:08:22 -- scripts/common.sh@364 -- # decimal 1 00:17:51.536 16:08:22 -- scripts/common.sh@352 -- # local d=1 00:17:51.536 16:08:22 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:51.536 16:08:22 -- scripts/common.sh@354 -- # echo 1 00:17:51.536 16:08:22 -- scripts/common.sh@364 -- # ver1[v]=1 00:17:51.536 16:08:22 -- scripts/common.sh@365 -- # decimal 2 00:17:51.536 16:08:22 -- scripts/common.sh@352 -- # local d=2 00:17:51.536 16:08:22 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:51.536 16:08:22 -- scripts/common.sh@354 -- # echo 2 00:17:51.536 16:08:22 -- scripts/common.sh@365 -- # ver2[v]=2 00:17:51.536 16:08:22 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:17:51.536 16:08:22 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:17:51.536 16:08:22 -- scripts/common.sh@367 -- # return 0 00:17:51.536 16:08:22 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:51.536 16:08:22 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:17:51.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.536 --rc genhtml_branch_coverage=1 00:17:51.536 --rc genhtml_function_coverage=1 00:17:51.536 --rc genhtml_legend=1 00:17:51.536 --rc geninfo_all_blocks=1 00:17:51.536 --rc geninfo_unexecuted_blocks=1 00:17:51.536 00:17:51.536 ' 00:17:51.536 16:08:22 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:17:51.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.536 --rc genhtml_branch_coverage=1 00:17:51.536 --rc genhtml_function_coverage=1 00:17:51.536 --rc genhtml_legend=1 00:17:51.536 --rc geninfo_all_blocks=1 00:17:51.536 --rc geninfo_unexecuted_blocks=1 00:17:51.536 00:17:51.536 ' 00:17:51.536 16:08:22 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:17:51.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.536 --rc genhtml_branch_coverage=1 00:17:51.536 --rc genhtml_function_coverage=1 00:17:51.536 --rc genhtml_legend=1 00:17:51.536 --rc geninfo_all_blocks=1 00:17:51.536 --rc geninfo_unexecuted_blocks=1 00:17:51.536 00:17:51.536 ' 
00:17:51.536 16:08:22 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:17:51.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:51.536 --rc genhtml_branch_coverage=1 00:17:51.536 --rc genhtml_function_coverage=1 00:17:51.536 --rc genhtml_legend=1 00:17:51.536 --rc geninfo_all_blocks=1 00:17:51.536 --rc geninfo_unexecuted_blocks=1 00:17:51.536 00:17:51.536 ' 00:17:51.536 16:08:22 -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:17:51.536 16:08:22 -- nvmf/common.sh@7 -- # uname -s 00:17:51.536 16:08:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:51.536 16:08:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:51.536 16:08:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:51.536 16:08:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:51.536 16:08:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:51.536 16:08:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:51.536 16:08:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:51.536 16:08:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:51.536 16:08:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:51.536 16:08:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:51.536 16:08:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:17:51.536 16:08:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:17:51.536 16:08:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:51.536 16:08:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:51.536 16:08:22 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:51.536 16:08:22 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:17:51.536 16:08:22 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:51.536 16:08:22 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:51.536 16:08:22 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:51.536 16:08:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.536 16:08:22 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.536 16:08:22 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.536 16:08:22 -- paths/export.sh@5 -- # export PATH 00:17:51.537 16:08:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:51.537 16:08:22 -- nvmf/common.sh@46 -- # : 0 00:17:51.537 16:08:22 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:17:51.537 16:08:22 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:17:51.537 16:08:22 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:17:51.537 16:08:22 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:51.537 16:08:22 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:51.537 16:08:22 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:17:51.537 16:08:22 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:17:51.537 16:08:22 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:17:51.537 16:08:22 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:51.537 16:08:22 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:51.537 16:08:22 -- target/host_management.sh@104 -- # nvmftestinit 00:17:51.537 16:08:22 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:17:51.537 16:08:22 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:51.537 16:08:22 -- nvmf/common.sh@436 -- # prepare_net_devs 00:17:51.537 16:08:22 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:17:51.537 16:08:22 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:17:51.537 16:08:22 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:51.537 16:08:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:51.537 16:08:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:51.537 16:08:22 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:17:51.537 16:08:22 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:17:51.537 16:08:22 -- nvmf/common.sh@284 -- # xtrace_disable 00:17:51.537 16:08:22 -- common/autotest_common.sh@10 -- # set +x 00:17:58.105 16:08:27 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:17:58.105 16:08:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:17:58.105 16:08:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:17:58.105 16:08:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:17:58.105 16:08:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:17:58.105 16:08:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:17:58.105 16:08:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:17:58.105 16:08:28 -- nvmf/common.sh@294 -- # net_devs=() 00:17:58.105 16:08:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:17:58.105 
16:08:28 -- nvmf/common.sh@295 -- # e810=() 00:17:58.105 16:08:28 -- nvmf/common.sh@295 -- # local -ga e810 00:17:58.105 16:08:28 -- nvmf/common.sh@296 -- # x722=() 00:17:58.105 16:08:28 -- nvmf/common.sh@296 -- # local -ga x722 00:17:58.105 16:08:28 -- nvmf/common.sh@297 -- # mlx=() 00:17:58.105 16:08:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:17:58.105 16:08:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:58.105 16:08:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:58.105 16:08:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:58.105 16:08:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:58.105 16:08:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:58.105 16:08:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:58.105 16:08:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:58.105 16:08:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:58.105 16:08:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:58.105 16:08:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:58.105 16:08:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:58.105 16:08:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:17:58.105 16:08:28 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:17:58.105 16:08:28 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:17:58.105 16:08:28 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:17:58.106 16:08:28 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:17:58.106 16:08:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:17:58.106 16:08:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:58.106 16:08:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:17:58.106 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:17:58.106 16:08:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:58.106 16:08:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:17:58.106 16:08:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:17:58.106 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:17:58.106 16:08:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:17:58.106 16:08:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:17:58.106 16:08:28 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:58.106 16:08:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.106 16:08:28 -- 
nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:58.106 16:08:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.106 16:08:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:17:58.106 Found net devices under 0000:d9:00.0: mlx_0_0 00:17:58.106 16:08:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.106 16:08:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:17:58.106 16:08:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:58.106 16:08:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:17:58.106 16:08:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:58.106 16:08:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:17:58.106 Found net devices under 0000:d9:00.1: mlx_0_1 00:17:58.106 16:08:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:17:58.106 16:08:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:17:58.106 16:08:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:17:58.106 16:08:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@408 -- # rdma_device_init 00:17:58.106 16:08:28 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:17:58.106 16:08:28 -- nvmf/common.sh@57 -- # uname 00:17:58.106 16:08:28 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:17:58.106 16:08:28 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:17:58.106 16:08:28 -- nvmf/common.sh@62 -- # modprobe ib_core 00:17:58.106 16:08:28 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:17:58.106 16:08:28 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:17:58.106 16:08:28 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:17:58.106 16:08:28 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:17:58.106 16:08:28 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:17:58.106 16:08:28 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:17:58.106 16:08:28 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:17:58.106 16:08:28 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:17:58.106 16:08:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:58.106 16:08:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:58.106 16:08:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:58.106 16:08:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:58.106 16:08:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:58.106 16:08:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:58.106 16:08:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.106 16:08:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:58.106 16:08:28 -- nvmf/common.sh@104 -- # continue 2 00:17:58.106 16:08:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:58.106 16:08:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.106 16:08:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.106 16:08:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:17:58.106 16:08:28 -- nvmf/common.sh@104 -- # continue 2 00:17:58.106 16:08:28 -- 
nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:58.106 16:08:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:17:58.106 16:08:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:58.106 16:08:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:58.106 16:08:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:58.106 16:08:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:58.106 16:08:28 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:17:58.106 16:08:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:17:58.106 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:58.106 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:17:58.106 altname enp217s0f0np0 00:17:58.106 altname ens818f0np0 00:17:58.106 inet 192.168.100.8/24 scope global mlx_0_0 00:17:58.106 valid_lft forever preferred_lft forever 00:17:58.106 16:08:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:17:58.106 16:08:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:17:58.106 16:08:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:58.106 16:08:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:58.106 16:08:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:58.106 16:08:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:58.106 16:08:28 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:17:58.106 16:08:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:17:58.106 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:17:58.106 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:17:58.106 altname enp217s0f1np1 00:17:58.106 altname ens818f1np1 00:17:58.106 inet 192.168.100.9/24 scope global mlx_0_1 00:17:58.106 valid_lft forever preferred_lft forever 00:17:58.106 16:08:28 -- nvmf/common.sh@410 -- # return 0 00:17:58.106 16:08:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:17:58.106 16:08:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:17:58.106 16:08:28 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:17:58.106 16:08:28 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:17:58.106 16:08:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:17:58.106 16:08:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:17:58.106 16:08:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:17:58.106 16:08:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:17:58.106 16:08:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:17:58.106 16:08:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:58.106 16:08:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.106 16:08:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:17:58.106 16:08:28 -- nvmf/common.sh@104 -- # continue 2 00:17:58.106 16:08:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:17:58.106 16:08:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.106 16:08:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:17:58.106 16:08:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:17:58.106 16:08:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 
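The allocate_nic_ips loop above reads each RDMA interface's IPv4 address with an "ip -o -4 addr show | awk | cut" pipeline before deciding whether it still needs to assign one. A minimal standalone sketch of that same extraction (the function name get_ipv4 is illustrative, not the script's own helper):

    # Print the IPv4 address(es) of an interface, one per line, without the prefix length.
    get_ipv4() {
        local iface=$1
        ip -o -4 addr show "$iface" | awk '{print $4}' | cut -d/ -f1
    }

    get_ipv4 mlx_0_0    # prints 192.168.100.8 on this node
    get_ipv4 mlx_0_1    # prints 192.168.100.9 on this node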
00:17:58.106 16:08:28 -- nvmf/common.sh@104 -- # continue 2 00:17:58.106 16:08:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:58.106 16:08:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:17:58.106 16:08:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:17:58.106 16:08:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:17:58.106 16:08:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:58.106 16:08:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:58.106 16:08:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:17:58.106 16:08:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:17:58.106 16:08:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:17:58.106 16:08:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:17:58.106 16:08:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:17:58.106 16:08:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:17:58.106 16:08:28 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:17:58.106 192.168.100.9' 00:17:58.106 16:08:28 -- nvmf/common.sh@445 -- # head -n 1 00:17:58.106 16:08:28 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:17:58.106 192.168.100.9' 00:17:58.106 16:08:28 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:17:58.106 16:08:28 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:17:58.106 192.168.100.9' 00:17:58.106 16:08:28 -- nvmf/common.sh@446 -- # tail -n +2 00:17:58.106 16:08:28 -- nvmf/common.sh@446 -- # head -n 1 00:17:58.106 16:08:28 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:17:58.106 16:08:28 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:17:58.106 16:08:28 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:17:58.106 16:08:28 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:17:58.106 16:08:28 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:17:58.106 16:08:28 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:17:58.106 16:08:28 -- target/host_management.sh@106 -- # run_test nvmf_host_management nvmf_host_management 00:17:58.106 16:08:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:58.106 16:08:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:58.106 16:08:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.106 ************************************ 00:17:58.106 START TEST nvmf_host_management 00:17:58.106 ************************************ 00:17:58.106 16:08:28 -- common/autotest_common.sh@1114 -- # nvmf_host_management 00:17:58.106 16:08:28 -- target/host_management.sh@69 -- # starttarget 00:17:58.106 16:08:28 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:17:58.106 16:08:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:17:58.106 16:08:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:58.106 16:08:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.106 16:08:28 -- nvmf/common.sh@469 -- # nvmfpid=1340155 00:17:58.106 16:08:28 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:17:58.106 16:08:28 -- nvmf/common.sh@470 -- # waitforlisten 1340155 00:17:58.106 16:08:28 -- common/autotest_common.sh@829 -- # '[' -z 1340155 ']' 00:17:58.106 16:08:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.106 16:08:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.106 16:08:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:58.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:58.106 16:08:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.106 16:08:28 -- common/autotest_common.sh@10 -- # set +x 00:17:58.106 [2024-11-20 16:08:28.293999] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:58.106 [2024-11-20 16:08:28.294048] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.106 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.106 [2024-11-20 16:08:28.365057] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:58.106 [2024-11-20 16:08:28.402221] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:17:58.106 [2024-11-20 16:08:28.402340] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.106 [2024-11-20 16:08:28.402350] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.106 [2024-11-20 16:08:28.402360] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:58.106 [2024-11-20 16:08:28.402409] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.106 [2024-11-20 16:08:28.402493] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:58.106 [2024-11-20 16:08:28.402606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.106 [2024-11-20 16:08:28.402607] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:17:58.363 16:08:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:58.363 16:08:29 -- common/autotest_common.sh@862 -- # return 0 00:17:58.363 16:08:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:17:58.363 16:08:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:58.363 16:08:29 -- common/autotest_common.sh@10 -- # set +x 00:17:58.363 16:08:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:58.363 16:08:29 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:17:58.363 16:08:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.363 16:08:29 -- common/autotest_common.sh@10 -- # set +x 00:17:58.621 [2024-11-20 16:08:29.186767] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xfbc3c0/0xfc0890) succeed. 00:17:58.621 [2024-11-20 16:08:29.196017] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xfbd960/0x1001f30) succeed. 
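Stripped of the xtrace noise, the target bring-up traced above amounts to: start nvmf_tgt, wait for its RPC socket, then create the RDMA transport with the buffer settings shown in the log. A rough standalone equivalent using SPDK's rpc.py (the relative paths and the readiness poll are illustrative; the test scripts use their own waitforlisten helper instead):

    # Launch the NVMe-oF target on the same core mask used in this run.
    ./build/bin/nvmf_tgt -m 0x1E &

    # Poll the default RPC socket until the application answers.
    until ./scripts/rpc.py spdk_get_version >/dev/null 2>&1; do sleep 0.2; done

    # RDMA transport with the options seen above.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192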
00:17:58.621 16:08:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.621 16:08:29 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:17:58.621 16:08:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:58.621 16:08:29 -- common/autotest_common.sh@10 -- # set +x 00:17:58.621 16:08:29 -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:17:58.621 16:08:29 -- target/host_management.sh@23 -- # cat 00:17:58.621 16:08:29 -- target/host_management.sh@30 -- # rpc_cmd 00:17:58.621 16:08:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.621 16:08:29 -- common/autotest_common.sh@10 -- # set +x 00:17:58.621 Malloc0 00:17:58.621 [2024-11-20 16:08:29.373491] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:17:58.621 16:08:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.621 16:08:29 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:17:58.621 16:08:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:58.621 16:08:29 -- common/autotest_common.sh@10 -- # set +x 00:17:58.881 16:08:29 -- target/host_management.sh@73 -- # perfpid=1340386 00:17:58.881 16:08:29 -- target/host_management.sh@74 -- # waitforlisten 1340386 /var/tmp/bdevperf.sock 00:17:58.881 16:08:29 -- common/autotest_common.sh@829 -- # '[' -z 1340386 ']' 00:17:58.881 16:08:29 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.881 16:08:29 -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:58.881 16:08:29 -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:17:58.881 16:08:29 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:17:58.881 16:08:29 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.881 16:08:29 -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:58.881 16:08:29 -- nvmf/common.sh@520 -- # config=() 00:17:58.881 16:08:29 -- common/autotest_common.sh@10 -- # set +x 00:17:58.881 16:08:29 -- nvmf/common.sh@520 -- # local subsystem config 00:17:58.881 16:08:29 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:17:58.881 16:08:29 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:17:58.881 { 00:17:58.881 "params": { 00:17:58.881 "name": "Nvme$subsystem", 00:17:58.881 "trtype": "$TEST_TRANSPORT", 00:17:58.881 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:58.881 "adrfam": "ipv4", 00:17:58.881 "trsvcid": "$NVMF_PORT", 00:17:58.881 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:58.881 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:58.881 "hdgst": ${hdgst:-false}, 00:17:58.881 "ddgst": ${ddgst:-false} 00:17:58.881 }, 00:17:58.881 "method": "bdev_nvme_attach_controller" 00:17:58.881 } 00:17:58.881 EOF 00:17:58.881 )") 00:17:58.881 16:08:29 -- nvmf/common.sh@542 -- # cat 00:17:58.881 16:08:29 -- nvmf/common.sh@544 -- # jq . 
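On the initiator side, bdevperf is fed a JSON config built by gen_nvmf_target_json through a process substitution; the generated text (printed just below) holds a single bdev_nvme_attach_controller entry aimed at the listener created above. A hand-written equivalent with the config in a regular file, assuming SPDK's standard "subsystems"/"config" wrapper around the params that appear verbatim in the trace (the filename is illustrative):

    cat > /tmp/nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "rdma",
                "traddr": "192.168.100.8",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF

    # 64-deep queue, 64 KiB I/Os, verify workload, 10 second run, as in the log.
    ./build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /tmp/nvme0.json -q 64 -o 65536 -w verify -t 10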
00:17:58.881 16:08:29 -- nvmf/common.sh@545 -- # IFS=, 00:17:58.881 16:08:29 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:17:58.881 "params": { 00:17:58.881 "name": "Nvme0", 00:17:58.881 "trtype": "rdma", 00:17:58.881 "traddr": "192.168.100.8", 00:17:58.881 "adrfam": "ipv4", 00:17:58.881 "trsvcid": "4420", 00:17:58.881 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:58.881 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:17:58.881 "hdgst": false, 00:17:58.881 "ddgst": false 00:17:58.881 }, 00:17:58.881 "method": "bdev_nvme_attach_controller" 00:17:58.881 }' 00:17:58.881 [2024-11-20 16:08:29.475455] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:17:58.881 [2024-11-20 16:08:29.475507] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340386 ] 00:17:58.881 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.881 [2024-11-20 16:08:29.547883] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.881 [2024-11-20 16:08:29.584396] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.140 Running I/O for 10 seconds... 00:17:59.708 16:08:30 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:59.708 16:08:30 -- common/autotest_common.sh@862 -- # return 0 00:17:59.708 16:08:30 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:17:59.708 16:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.708 16:08:30 -- common/autotest_common.sh@10 -- # set +x 00:17:59.708 16:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.708 16:08:30 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:59.708 16:08:30 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:17:59.708 16:08:30 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:17:59.709 16:08:30 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:17:59.709 16:08:30 -- target/host_management.sh@52 -- # local ret=1 00:17:59.709 16:08:30 -- target/host_management.sh@53 -- # local i 00:17:59.709 16:08:30 -- target/host_management.sh@54 -- # (( i = 10 )) 00:17:59.709 16:08:30 -- target/host_management.sh@54 -- # (( i != 0 )) 00:17:59.709 16:08:30 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:17:59.709 16:08:30 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:17:59.709 16:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.709 16:08:30 -- common/autotest_common.sh@10 -- # set +x 00:17:59.709 16:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.709 16:08:30 -- target/host_management.sh@55 -- # read_io_count=3194 00:17:59.709 16:08:30 -- target/host_management.sh@58 -- # '[' 3194 -ge 100 ']' 00:17:59.709 16:08:30 -- target/host_management.sh@59 -- # ret=0 00:17:59.709 16:08:30 -- target/host_management.sh@60 -- # break 00:17:59.709 16:08:30 -- target/host_management.sh@64 -- # return 0 00:17:59.709 16:08:30 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:59.709 16:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.709 16:08:30 -- common/autotest_common.sh@10 -- # set +x 00:17:59.709 16:08:30 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.709 16:08:30 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:17:59.709 16:08:30 -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.709 16:08:30 -- common/autotest_common.sh@10 -- # set +x 00:17:59.709 16:08:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.709 16:08:30 -- target/host_management.sh@87 -- # sleep 1 00:18:00.646 [2024-11-20 16:08:31.370899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:41600 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190afd80 len:0x10000 key:0x182600 00:18:00.646 [2024-11-20 16:08:31.370934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.370953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:41728 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e4fa80 len:0x10000 key:0x182500 00:18:00.646 [2024-11-20 16:08:31.370963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.370974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:41856 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018eeff80 len:0x10000 key:0x182500 00:18:00.646 [2024-11-20 16:08:31.370984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.370994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:41984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190cfe80 len:0x10000 key:0x182600 00:18:00.646 [2024-11-20 16:08:31.371003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.371015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:42112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e1f900 len:0x10000 key:0x182500 00:18:00.646 [2024-11-20 16:08:31.371024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.371034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:42240 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019231980 len:0x10000 key:0x182700 00:18:00.646 [2024-11-20 16:08:31.371043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.371054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:42368 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001903fa00 len:0x10000 key:0x182600 00:18:00.646 [2024-11-20 16:08:31.371063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.371074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:42496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019241a00 len:0x10000 key:0x182700 00:18:00.646 [2024-11-20 16:08:31.371089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.371100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:42624 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019211880 len:0x10000 key:0x182700 00:18:00.646 [2024-11-20 16:08:31.371109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.371120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:42752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a4b040 len:0x10000 key:0x182000 00:18:00.646 [2024-11-20 16:08:31.371129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.371140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:42880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001384a480 len:0x10000 key:0x182400 00:18:00.646 [2024-11-20 16:08:31.371151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.371161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:43008 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a1aec0 len:0x10000 key:0x182000 00:18:00.646 [2024-11-20 16:08:31.371172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.371182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:43136 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001387a600 len:0x10000 key:0x182400 00:18:00.646 [2024-11-20 16:08:31.371191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.371202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:43264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192b1d80 len:0x10000 key:0x182700 00:18:00.646 [2024-11-20 16:08:31.371210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.371221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:43392 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001900f880 len:0x10000 key:0x182600 00:18:00.646 [2024-11-20 16:08:31.371231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.371244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:43520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001901f900 len:0x10000 key:0x182600 00:18:00.646 [2024-11-20 16:08:31.371253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.371264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:43648 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e8fc80 len:0x10000 key:0x182500 00:18:00.646 [2024-11-20 16:08:31.371274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.371284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:43776 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019281c00 len:0x10000 key:0x182700 00:18:00.646 [2024-11-20 16:08:31.371295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.371306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:43904 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001383a400 len:0x10000 key:0x182400 00:18:00.646 [2024-11-20 16:08:31.371316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.646 [2024-11-20 16:08:31.371330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44032 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001388a680 len:0x10000 key:0x182400 00:18:00.647 [2024-11-20 16:08:31.371340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:44160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ba800 len:0x10000 key:0x182400 00:18:00.647 [2024-11-20 16:08:31.371362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:44288 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001902f980 len:0x10000 key:0x182600 00:18:00.647 [2024-11-20 16:08:31.371382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:44416 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001904fa80 len:0x10000 key:0x182600 00:18:00.647 [2024-11-20 16:08:31.371402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:44544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x182500 00:18:00.647 [2024-11-20 16:08:31.371421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:44672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ecfe80 len:0x10000 key:0x182500 00:18:00.647 [2024-11-20 16:08:31.371441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:44800 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003a2af40 len:0x10000 key:0x182000 00:18:00.647 [2024-11-20 16:08:31.371460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:44928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019291c80 len:0x10000 key:0x182700 00:18:00.647 [2024-11-20 16:08:31.371480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:45056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca880 len:0x10000 key:0x182400 00:18:00.647 [2024-11-20 16:08:31.371501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:45184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019271b80 len:0x10000 key:0x182700 00:18:00.647 [2024-11-20 16:08:31.371525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192a1d00 len:0x10000 key:0x182700 00:18:00.647 [2024-11-20 16:08:31.371544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45440 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e0f880 len:0x10000 key:0x182500 00:18:00.647 [2024-11-20 16:08:31.371566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:45568 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001908fc80 len:0x10000 key:0x182600 00:18:00.647 [2024-11-20 16:08:31.371586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:45696 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x182400 00:18:00.647 [2024-11-20 16:08:31.371605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x182400 00:18:00.647 [2024-11-20 16:08:31.371630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:45952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000192d1e80 len:0x10000 key:0x182700 00:18:00.647 [2024-11-20 16:08:31.371650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:46080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001905fb00 len:0x10000 key:0x182600 00:18:00.647 [2024-11-20 16:08:31.371669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:46208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190dff00 len:0x10000 key:0x182600 00:18:00.647 [2024-11-20 16:08:31.371688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001386a580 len:0x10000 key:0x182400 00:18:00.647 [2024-11-20 16:08:31.371708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:46464 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018ebfe00 len:0x10000 key:0x182500 00:18:00.647 [2024-11-20 16:08:31.371728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:46592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea980 len:0x10000 key:0x182400 00:18:00.647 [2024-11-20 16:08:31.371747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:46720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019201800 len:0x10000 key:0x182700 00:18:00.647 [2024-11-20 16:08:31.371766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000190eff80 len:0x10000 key:0x182600 00:18:00.647 [2024-11-20 16:08:31.371787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:46976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x182600 00:18:00.647 [2024-11-20 16:08:31.371807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:47104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001909fd00 len:0x10000 key:0x182600 00:18:00.647 [2024-11-20 16:08:31.371826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:36352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc5d000 len:0x10000 key:0x182300 00:18:00.647 [2024-11-20 16:08:31.371845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:36480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc3c000 len:0x10000 key:0x182300 00:18:00.647 [2024-11-20 16:08:31.371864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:36608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cc1b000 len:0x10000 key:0x182300 00:18:00.647 [2024-11-20 16:08:31.371884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.647 [2024-11-20 16:08:31.371894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:36736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbfa000 len:0x10000 key:0x182300 00:18:00.648 [2024-11-20 16:08:31.371903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.648 [2024-11-20 16:08:31.371913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:36864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbd9000 len:0x10000 key:0x182300 00:18:00.648 [2024-11-20 16:08:31.371922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.648 [2024-11-20 16:08:31.371934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbb8000 len:0x10000 key:0x182300 00:18:00.648 [2024-11-20 16:08:31.371943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.648 [2024-11-20 16:08:31.371954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:37760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cab0000 len:0x10000 key:0x182300 00:18:00.648 [2024-11-20 16:08:31.371962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.648 [2024-11-20 16:08:31.371973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:38016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ceaf000 len:0x10000 key:0x182300 00:18:00.648 [2024-11-20 16:08:31.371981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.648 [2024-11-20 16:08:31.371994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:38400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce8e000 len:0x10000 key:0x182300 00:18:00.648 [2024-11-20 16:08:31.372003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.648 [2024-11-20 16:08:31.372014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce6d000 len:0x10000 key:0x182300 00:18:00.648 [2024-11-20 16:08:31.372023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.648 [2024-11-20 16:08:31.372033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:39424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce4c000 len:0x10000 key:0x182300 00:18:00.648 [2024-11-20 16:08:31.372042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.648 [2024-11-20 16:08:31.372052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce2b000 len:0x10000 key:0x182300 00:18:00.648 [2024-11-20 16:08:31.372061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.648 [2024-11-20 16:08:31.372072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:40064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ce0a000 len:0x10000 key:0x182300 00:18:00.648 [2024-11-20 16:08:31.372081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.648 [2024-11-20 16:08:31.372091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:40320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cde9000 len:0x10000 key:0x182300 00:18:00.648 [2024-11-20 16:08:31.372100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.648 [2024-11-20 16:08:31.372110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cdc8000 len:0x10000 key:0x182300 00:18:00.648 [2024-11-20 16:08:31.372119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.648 [2024-11-20 16:08:31.372129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:40576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cda7000 len:0x10000 key:0x182300 00:18:00.648 [2024-11-20 16:08:31.372138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.648 [2024-11-20 16:08:31.372149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c62d000 len:0x10000 key:0x182300 00:18:00.648 [2024-11-20 16:08:31.372157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.648 [2024-11-20 16:08:31.372168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:40832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c60c000 len:0x10000 key:0x182300 00:18:00.648 [2024-11-20 16:08:31.372177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.648 [2024-11-20 16:08:31.372187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:41216 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5eb000 len:0x10000 key:0x182300 00:18:00.648 [2024-11-20 16:08:31.372197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.648 [2024-11-20 16:08:31.372207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:41344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5ca000 len:0x10000 key:0x182300 00:18:00.648 [2024-11-20 16:08:31.372220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53749 cdw0:393a4000 sqhd:7a00 p:0 m:0 dnr:0 00:18:00.648 [2024-11-20 16:08:31.374049] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192015c0 was disconnected and freed. reset controller. 00:18:00.648 [2024-11-20 16:08:31.374927] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:18:00.648 task offset: 41600 on job bdev=Nvme0n1 fails 00:18:00.648 00:18:00.648 Latency(us) 00:18:00.648 [2024-11-20T15:08:31.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:00.648 [2024-11-20T15:08:31.453Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:00.648 [2024-11-20T15:08:31.453Z] Job: Nvme0n1 ended in about 1.62 seconds with error 00:18:00.648 Verification LBA range: start 0x0 length 0x400 00:18:00.648 Nvme0n1 : 1.62 2089.81 130.61 39.61 0.00 29862.33 3460.30 1013343.85 00:18:00.648 [2024-11-20T15:08:31.453Z] =================================================================================================================== 00:18:00.648 [2024-11-20T15:08:31.453Z] Total : 2089.81 130.61 39.61 0.00 29862.33 3460.30 1013343.85 00:18:00.648 [2024-11-20 16:08:31.376576] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:00.648 16:08:31 -- target/host_management.sh@91 -- # kill -9 1340386 00:18:00.648 16:08:31 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:18:00.648 16:08:31 -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:18:00.648 16:08:31 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:18:00.648 16:08:31 -- nvmf/common.sh@520 -- # config=() 00:18:00.648 16:08:31 -- nvmf/common.sh@520 -- # local subsystem config 00:18:00.648 16:08:31 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:18:00.648 16:08:31 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:18:00.648 { 00:18:00.648 "params": { 00:18:00.648 "name": "Nvme$subsystem", 00:18:00.648 "trtype": "$TEST_TRANSPORT", 00:18:00.648 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:00.648 "adrfam": "ipv4", 00:18:00.648 "trsvcid": "$NVMF_PORT", 00:18:00.648 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:00.648 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:00.648 "hdgst": ${hdgst:-false}, 00:18:00.648 "ddgst": ${ddgst:-false} 00:18:00.648 }, 00:18:00.648 "method": "bdev_nvme_attach_controller" 00:18:00.648 } 00:18:00.648 EOF 00:18:00.648 )") 00:18:00.648 16:08:31 -- nvmf/common.sh@542 -- # cat 00:18:00.648 
16:08:31 -- nvmf/common.sh@544 -- # jq . 00:18:00.648 16:08:31 -- nvmf/common.sh@545 -- # IFS=, 00:18:00.648 16:08:31 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:18:00.648 "params": { 00:18:00.648 "name": "Nvme0", 00:18:00.648 "trtype": "rdma", 00:18:00.648 "traddr": "192.168.100.8", 00:18:00.648 "adrfam": "ipv4", 00:18:00.648 "trsvcid": "4420", 00:18:00.648 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:00.648 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:18:00.648 "hdgst": false, 00:18:00.648 "ddgst": false 00:18:00.648 }, 00:18:00.648 "method": "bdev_nvme_attach_controller" 00:18:00.649 }' 00:18:00.649 [2024-11-20 16:08:31.430851] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:00.649 [2024-11-20 16:08:31.430900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1340749 ] 00:18:00.907 EAL: No free 2048 kB hugepages reported on node 1 00:18:00.907 [2024-11-20 16:08:31.501269] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:00.907 [2024-11-20 16:08:31.538175] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:00.907 Running I/O for 1 seconds... 00:18:02.287 00:18:02.287 Latency(us) 00:18:02.287 [2024-11-20T15:08:33.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:02.287 [2024-11-20T15:08:33.092Z] Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:18:02.287 Verification LBA range: start 0x0 length 0x400 00:18:02.287 Nvme0n1 : 1.01 5598.58 349.91 0.00 0.00 11259.14 524.29 24641.54 00:18:02.287 [2024-11-20T15:08:33.092Z] =================================================================================================================== 00:18:02.287 [2024-11-20T15:08:33.092Z] Total : 5598.58 349.91 0.00 0.00 11259.14 524.29 24641.54 00:18:02.287 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 68: 1340386 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "0") -q 64 -o 65536 -w verify -t 10 "${NO_HUGE[@]}" 00:18:02.287 16:08:32 -- target/host_management.sh@101 -- # stoptarget 00:18:02.287 16:08:32 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:18:02.287 16:08:32 -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:18:02.287 16:08:32 -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:18:02.287 16:08:32 -- target/host_management.sh@40 -- # nvmftestfini 00:18:02.287 16:08:32 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:02.287 16:08:32 -- nvmf/common.sh@116 -- # sync 00:18:02.287 16:08:32 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:02.287 16:08:32 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:02.287 16:08:32 -- nvmf/common.sh@119 -- # set +e 00:18:02.287 16:08:32 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:02.287 16:08:32 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:02.287 rmmod nvme_rdma 00:18:02.287 rmmod nvme_fabrics 00:18:02.287 16:08:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:02.287 16:08:32 -- nvmf/common.sh@123 -- # set -e 00:18:02.287 16:08:32 -- nvmf/common.sh@124 -- # return 0 00:18:02.287 16:08:32 -- nvmf/common.sh@477 -- # '[' -n 1340155 ']' 00:18:02.287 16:08:32 -- 
nvmf/common.sh@478 -- # killprocess 1340155 00:18:02.287 16:08:32 -- common/autotest_common.sh@936 -- # '[' -z 1340155 ']' 00:18:02.287 16:08:32 -- common/autotest_common.sh@940 -- # kill -0 1340155 00:18:02.287 16:08:32 -- common/autotest_common.sh@941 -- # uname 00:18:02.287 16:08:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:02.287 16:08:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1340155 00:18:02.287 16:08:33 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:02.287 16:08:33 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:02.287 16:08:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1340155' 00:18:02.287 killing process with pid 1340155 00:18:02.287 16:08:33 -- common/autotest_common.sh@955 -- # kill 1340155 00:18:02.287 16:08:33 -- common/autotest_common.sh@960 -- # wait 1340155 00:18:02.547 [2024-11-20 16:08:33.293209] app.c: 605:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:18:02.547 16:08:33 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:02.547 16:08:33 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:02.547 00:18:02.547 real 0m5.076s 00:18:02.547 user 0m22.838s 00:18:02.547 sys 0m1.027s 00:18:02.547 16:08:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:02.547 16:08:33 -- common/autotest_common.sh@10 -- # set +x 00:18:02.547 ************************************ 00:18:02.547 END TEST nvmf_host_management 00:18:02.547 ************************************ 00:18:02.806 16:08:33 -- target/host_management.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:18:02.806 00:18:02.806 real 0m11.302s 00:18:02.806 user 0m24.409s 00:18:02.806 sys 0m5.641s 00:18:02.806 16:08:33 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:02.806 16:08:33 -- common/autotest_common.sh@10 -- # set +x 00:18:02.806 ************************************ 00:18:02.806 END TEST nvmf_host_management 00:18:02.806 ************************************ 00:18:02.807 16:08:33 -- nvmf/nvmf.sh@47 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:18:02.807 16:08:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:02.807 16:08:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:02.807 16:08:33 -- common/autotest_common.sh@10 -- # set +x 00:18:02.807 ************************************ 00:18:02.807 START TEST nvmf_lvol 00:18:02.807 ************************************ 00:18:02.807 16:08:33 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=rdma 00:18:02.807 * Looking for test storage... 
00:18:02.807 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:02.807 16:08:33 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:02.807 16:08:33 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:02.807 16:08:33 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:02.807 16:08:33 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:02.807 16:08:33 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:02.807 16:08:33 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:02.807 16:08:33 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:02.807 16:08:33 -- scripts/common.sh@335 -- # IFS=.-: 00:18:02.807 16:08:33 -- scripts/common.sh@335 -- # read -ra ver1 00:18:02.807 16:08:33 -- scripts/common.sh@336 -- # IFS=.-: 00:18:02.807 16:08:33 -- scripts/common.sh@336 -- # read -ra ver2 00:18:02.807 16:08:33 -- scripts/common.sh@337 -- # local 'op=<' 00:18:02.807 16:08:33 -- scripts/common.sh@339 -- # ver1_l=2 00:18:02.807 16:08:33 -- scripts/common.sh@340 -- # ver2_l=1 00:18:02.807 16:08:33 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:02.807 16:08:33 -- scripts/common.sh@343 -- # case "$op" in 00:18:02.807 16:08:33 -- scripts/common.sh@344 -- # : 1 00:18:02.807 16:08:33 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:02.807 16:08:33 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:02.807 16:08:33 -- scripts/common.sh@364 -- # decimal 1 00:18:02.807 16:08:33 -- scripts/common.sh@352 -- # local d=1 00:18:02.807 16:08:33 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:02.807 16:08:33 -- scripts/common.sh@354 -- # echo 1 00:18:02.807 16:08:33 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:02.807 16:08:33 -- scripts/common.sh@365 -- # decimal 2 00:18:02.807 16:08:33 -- scripts/common.sh@352 -- # local d=2 00:18:02.807 16:08:33 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:02.807 16:08:33 -- scripts/common.sh@354 -- # echo 2 00:18:02.807 16:08:33 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:02.807 16:08:33 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:02.807 16:08:33 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:02.807 16:08:33 -- scripts/common.sh@367 -- # return 0 00:18:02.807 16:08:33 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:02.807 16:08:33 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:02.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.807 --rc genhtml_branch_coverage=1 00:18:02.807 --rc genhtml_function_coverage=1 00:18:02.807 --rc genhtml_legend=1 00:18:02.807 --rc geninfo_all_blocks=1 00:18:02.807 --rc geninfo_unexecuted_blocks=1 00:18:02.807 00:18:02.807 ' 00:18:02.807 16:08:33 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:02.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.807 --rc genhtml_branch_coverage=1 00:18:02.807 --rc genhtml_function_coverage=1 00:18:02.807 --rc genhtml_legend=1 00:18:02.807 --rc geninfo_all_blocks=1 00:18:02.807 --rc geninfo_unexecuted_blocks=1 00:18:02.807 00:18:02.807 ' 00:18:02.807 16:08:33 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:02.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.807 --rc genhtml_branch_coverage=1 00:18:02.807 --rc genhtml_function_coverage=1 00:18:02.807 --rc genhtml_legend=1 00:18:02.807 --rc geninfo_all_blocks=1 00:18:02.807 --rc geninfo_unexecuted_blocks=1 00:18:02.807 00:18:02.807 ' 
00:18:02.807 16:08:33 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:02.807 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:02.807 --rc genhtml_branch_coverage=1 00:18:02.807 --rc genhtml_function_coverage=1 00:18:02.807 --rc genhtml_legend=1 00:18:02.807 --rc geninfo_all_blocks=1 00:18:02.807 --rc geninfo_unexecuted_blocks=1 00:18:02.807 00:18:02.807 ' 00:18:02.807 16:08:33 -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:02.807 16:08:33 -- nvmf/common.sh@7 -- # uname -s 00:18:03.067 16:08:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:03.067 16:08:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:03.067 16:08:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:03.067 16:08:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:03.067 16:08:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:03.067 16:08:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:03.067 16:08:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:03.067 16:08:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:03.067 16:08:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:03.067 16:08:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:03.067 16:08:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:03.067 16:08:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:03.067 16:08:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:03.067 16:08:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:03.067 16:08:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:03.067 16:08:33 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:03.067 16:08:33 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:03.067 16:08:33 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:03.067 16:08:33 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:03.067 16:08:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.067 16:08:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.068 16:08:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.068 16:08:33 -- paths/export.sh@5 -- # export PATH 00:18:03.068 16:08:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:03.068 16:08:33 -- nvmf/common.sh@46 -- # : 0 00:18:03.068 16:08:33 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:03.068 16:08:33 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:03.068 16:08:33 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:03.068 16:08:33 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:03.068 16:08:33 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:03.068 16:08:33 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:03.068 16:08:33 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:03.068 16:08:33 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:03.068 16:08:33 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:03.068 16:08:33 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:03.068 16:08:33 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:18:03.068 16:08:33 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:18:03.068 16:08:33 -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:03.068 16:08:33 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:18:03.068 16:08:33 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:03.068 16:08:33 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:03.068 16:08:33 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:03.068 16:08:33 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:03.068 16:08:33 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:03.068 16:08:33 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:03.068 16:08:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:03.068 16:08:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:03.068 16:08:33 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:03.068 16:08:33 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:03.068 16:08:33 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:03.068 16:08:33 -- common/autotest_common.sh@10 -- # set +x 00:18:09.643 16:08:39 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:09.643 16:08:39 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:09.643 16:08:39 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:09.643 16:08:39 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:09.643 16:08:39 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:09.643 16:08:39 -- 
nvmf/common.sh@292 -- # pci_drivers=() 00:18:09.643 16:08:39 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:09.643 16:08:39 -- nvmf/common.sh@294 -- # net_devs=() 00:18:09.643 16:08:39 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:09.643 16:08:39 -- nvmf/common.sh@295 -- # e810=() 00:18:09.643 16:08:39 -- nvmf/common.sh@295 -- # local -ga e810 00:18:09.643 16:08:39 -- nvmf/common.sh@296 -- # x722=() 00:18:09.643 16:08:39 -- nvmf/common.sh@296 -- # local -ga x722 00:18:09.643 16:08:39 -- nvmf/common.sh@297 -- # mlx=() 00:18:09.643 16:08:39 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:09.643 16:08:39 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:09.643 16:08:39 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:09.643 16:08:39 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:09.643 16:08:39 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:09.643 16:08:39 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:09.643 16:08:39 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:09.643 16:08:39 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:09.643 16:08:39 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:09.643 16:08:39 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:09.643 16:08:39 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:09.643 16:08:39 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:09.643 16:08:39 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:09.643 16:08:39 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:18:09.643 16:08:39 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:09.643 16:08:39 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:09.643 16:08:39 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:09.643 16:08:39 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:09.643 16:08:39 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:09.643 16:08:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:09.643 16:08:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:09.643 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:09.643 16:08:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:09.643 16:08:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:09.643 16:08:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:09.643 16:08:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:09.643 16:08:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:09.643 16:08:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:09.643 16:08:39 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:09.643 16:08:39 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:09.643 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:09.643 16:08:39 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:09.643 16:08:39 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:09.643 16:08:39 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:09.643 16:08:39 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:09.643 16:08:39 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:09.643 16:08:39 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:09.643 16:08:39 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:09.643 16:08:39 -- 
nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:09.643 16:08:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:09.643 16:08:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.643 16:08:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:09.643 16:08:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.643 16:08:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:09.643 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:09.643 16:08:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.643 16:08:39 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:09.643 16:08:39 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:09.643 16:08:39 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:09.643 16:08:39 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:09.643 16:08:39 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:09.643 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:09.643 16:08:39 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:09.643 16:08:39 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:09.643 16:08:39 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:09.643 16:08:39 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:09.643 16:08:39 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:09.643 16:08:39 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:09.643 16:08:39 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:09.643 16:08:39 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:09.643 16:08:39 -- nvmf/common.sh@57 -- # uname 00:18:09.643 16:08:39 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:18:09.643 16:08:39 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:09.643 16:08:39 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:09.643 16:08:39 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:09.643 16:08:39 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:09.643 16:08:39 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:09.643 16:08:39 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:09.643 16:08:39 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:09.643 16:08:39 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:09.643 16:08:39 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:09.643 16:08:39 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:09.643 16:08:39 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:09.643 16:08:39 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:09.643 16:08:39 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:09.643 16:08:39 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:09.644 16:08:39 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:09.644 16:08:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:09.644 16:08:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:09.644 16:08:39 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:09.644 16:08:39 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:09.644 16:08:39 -- nvmf/common.sh@104 -- # continue 2 00:18:09.644 16:08:39 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:09.644 16:08:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:09.644 16:08:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:09.644 16:08:39 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:18:09.644 16:08:39 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:09.644 16:08:39 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:09.644 16:08:39 -- nvmf/common.sh@104 -- # continue 2 00:18:09.644 16:08:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:09.644 16:08:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:09.644 16:08:39 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:09.644 16:08:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:09.644 16:08:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:09.644 16:08:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:09.644 16:08:39 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:09.644 16:08:39 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:09.644 16:08:39 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:09.644 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:09.644 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:09.644 altname enp217s0f0np0 00:18:09.644 altname ens818f0np0 00:18:09.644 inet 192.168.100.8/24 scope global mlx_0_0 00:18:09.644 valid_lft forever preferred_lft forever 00:18:09.644 16:08:39 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:09.644 16:08:39 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:09.644 16:08:39 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:09.644 16:08:39 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:09.644 16:08:39 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:09.644 16:08:39 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:09.644 16:08:40 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:09.644 16:08:40 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:09.644 16:08:40 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:18:09.644 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:09.644 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:09.644 altname enp217s0f1np1 00:18:09.644 altname ens818f1np1 00:18:09.644 inet 192.168.100.9/24 scope global mlx_0_1 00:18:09.644 valid_lft forever preferred_lft forever 00:18:09.644 16:08:40 -- nvmf/common.sh@410 -- # return 0 00:18:09.644 16:08:40 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:09.644 16:08:40 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:09.644 16:08:40 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:09.644 16:08:40 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:09.644 16:08:40 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:09.644 16:08:40 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:09.644 16:08:40 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:09.644 16:08:40 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:09.644 16:08:40 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:09.644 16:08:40 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:09.644 16:08:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:09.644 16:08:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:09.644 16:08:40 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:09.644 16:08:40 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:09.644 16:08:40 -- nvmf/common.sh@104 -- # continue 2 00:18:09.644 16:08:40 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:09.644 16:08:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:09.644 16:08:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\0 ]] 00:18:09.644 16:08:40 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:09.644 16:08:40 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:09.644 16:08:40 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:09.644 16:08:40 -- nvmf/common.sh@104 -- # continue 2 00:18:09.644 16:08:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:09.644 16:08:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:09.644 16:08:40 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:09.644 16:08:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:09.644 16:08:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:09.644 16:08:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:09.644 16:08:40 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:09.644 16:08:40 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:09.644 16:08:40 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:09.644 16:08:40 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:09.644 16:08:40 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:09.644 16:08:40 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:09.644 16:08:40 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:09.644 192.168.100.9' 00:18:09.644 16:08:40 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:09.644 192.168.100.9' 00:18:09.644 16:08:40 -- nvmf/common.sh@445 -- # head -n 1 00:18:09.644 16:08:40 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:09.644 16:08:40 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:09.644 192.168.100.9' 00:18:09.644 16:08:40 -- nvmf/common.sh@446 -- # tail -n +2 00:18:09.644 16:08:40 -- nvmf/common.sh@446 -- # head -n 1 00:18:09.644 16:08:40 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:09.644 16:08:40 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:09.644 16:08:40 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:09.644 16:08:40 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:09.644 16:08:40 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:09.644 16:08:40 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:09.644 16:08:40 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:18:09.644 16:08:40 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:09.644 16:08:40 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:09.644 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:18:09.644 16:08:40 -- nvmf/common.sh@469 -- # nvmfpid=1344261 00:18:09.644 16:08:40 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:18:09.644 16:08:40 -- nvmf/common.sh@470 -- # waitforlisten 1344261 00:18:09.644 16:08:40 -- common/autotest_common.sh@829 -- # '[' -z 1344261 ']' 00:18:09.644 16:08:40 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.644 16:08:40 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:09.644 16:08:40 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.644 16:08:40 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:09.644 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:18:09.644 [2024-11-20 16:08:40.175650] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:09.644 [2024-11-20 16:08:40.175698] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:09.644 EAL: No free 2048 kB hugepages reported on node 1 00:18:09.644 [2024-11-20 16:08:40.246349] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:09.644 [2024-11-20 16:08:40.283146] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:09.644 [2024-11-20 16:08:40.283277] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:09.644 [2024-11-20 16:08:40.283288] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:09.644 [2024-11-20 16:08:40.283314] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:09.644 [2024-11-20 16:08:40.283367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:09.644 [2024-11-20 16:08:40.283463] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:09.644 [2024-11-20 16:08:40.283465] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.213 16:08:40 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:10.213 16:08:40 -- common/autotest_common.sh@862 -- # return 0 00:18:10.213 16:08:40 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:10.213 16:08:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:10.213 16:08:40 -- common/autotest_common.sh@10 -- # set +x 00:18:10.472 16:08:41 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:10.472 16:08:41 -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:10.472 [2024-11-20 16:08:41.204094] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x17fc600/0x1800ab0) succeed. 00:18:10.472 [2024-11-20 16:08:41.213222] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x17fdb00/0x1842150) succeed. 
00:18:10.731 16:08:41 -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:10.731 16:08:41 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:18:10.731 16:08:41 -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:10.990 16:08:41 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:18:10.990 16:08:41 -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:18:11.248 16:08:41 -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:18:11.507 16:08:42 -- target/nvmf_lvol.sh@29 -- # lvs=681892c0-c53f-45a8-98fa-413cc2e92338 00:18:11.507 16:08:42 -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 681892c0-c53f-45a8-98fa-413cc2e92338 lvol 20 00:18:11.507 16:08:42 -- target/nvmf_lvol.sh@32 -- # lvol=cb21e3ae-61ff-4381-8c6c-c5605fd2682e 00:18:11.507 16:08:42 -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:11.766 16:08:42 -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cb21e3ae-61ff-4381-8c6c-c5605fd2682e 00:18:12.026 16:08:42 -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:12.286 [2024-11-20 16:08:42.851046] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:12.286 16:08:42 -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:12.286 16:08:43 -- target/nvmf_lvol.sh@42 -- # perf_pid=1344804 00:18:12.286 16:08:43 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:18:12.286 16:08:43 -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:18:12.545 EAL: No free 2048 kB hugepages reported on node 1 00:18:13.482 16:08:44 -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot cb21e3ae-61ff-4381-8c6c-c5605fd2682e MY_SNAPSHOT 00:18:13.482 16:08:44 -- target/nvmf_lvol.sh@47 -- # snapshot=04cd0fb2-9c18-4559-ae17-ef7caddd19df 00:18:13.482 16:08:44 -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize cb21e3ae-61ff-4381-8c6c-c5605fd2682e 30 00:18:13.741 16:08:44 -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 04cd0fb2-9c18-4559-ae17-ef7caddd19df MY_CLONE 00:18:14.000 16:08:44 -- target/nvmf_lvol.sh@49 -- # clone=0247e18b-761d-47b7-a700-ba4a3abd9742 00:18:14.000 16:08:44 -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0247e18b-761d-47b7-a700-ba4a3abd9742 00:18:14.259 16:08:44 -- target/nvmf_lvol.sh@53 -- # wait 1344804 00:18:24.252 Initializing NVMe Controllers 00:18:24.252 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: 
nqn.2016-06.io.spdk:cnode0 00:18:24.252 Controller IO queue size 128, less than required. 00:18:24.252 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:18:24.252 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:18:24.252 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:18:24.252 Initialization complete. Launching workers. 00:18:24.252 ======================================================== 00:18:24.252 Latency(us) 00:18:24.252 Device Information : IOPS MiB/s Average min max 00:18:24.252 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 17193.30 67.16 7446.47 2298.25 43436.49 00:18:24.252 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 17072.40 66.69 7499.14 3354.23 45248.34 00:18:24.252 ======================================================== 00:18:24.252 Total : 34265.70 133.85 7472.72 2298.25 45248.34 00:18:24.252 00:18:24.252 16:08:54 -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:24.252 16:08:54 -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cb21e3ae-61ff-4381-8c6c-c5605fd2682e 00:18:24.252 16:08:54 -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 681892c0-c53f-45a8-98fa-413cc2e92338 00:18:24.252 16:08:55 -- target/nvmf_lvol.sh@60 -- # rm -f 00:18:24.252 16:08:55 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:18:24.252 16:08:55 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:18:24.252 16:08:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:18:24.252 16:08:55 -- nvmf/common.sh@116 -- # sync 00:18:24.252 16:08:55 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:18:24.252 16:08:55 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:18:24.252 16:08:55 -- nvmf/common.sh@119 -- # set +e 00:18:24.252 16:08:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:18:24.252 16:08:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:18:24.252 rmmod nvme_rdma 00:18:24.252 rmmod nvme_fabrics 00:18:24.511 16:08:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:18:24.511 16:08:55 -- nvmf/common.sh@123 -- # set -e 00:18:24.511 16:08:55 -- nvmf/common.sh@124 -- # return 0 00:18:24.511 16:08:55 -- nvmf/common.sh@477 -- # '[' -n 1344261 ']' 00:18:24.511 16:08:55 -- nvmf/common.sh@478 -- # killprocess 1344261 00:18:24.511 16:08:55 -- common/autotest_common.sh@936 -- # '[' -z 1344261 ']' 00:18:24.511 16:08:55 -- common/autotest_common.sh@940 -- # kill -0 1344261 00:18:24.511 16:08:55 -- common/autotest_common.sh@941 -- # uname 00:18:24.511 16:08:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:24.511 16:08:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1344261 00:18:24.511 16:08:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:24.511 16:08:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:24.511 16:08:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1344261' 00:18:24.511 killing process with pid 1344261 00:18:24.511 16:08:55 -- common/autotest_common.sh@955 -- # kill 1344261 00:18:24.511 16:08:55 -- common/autotest_common.sh@960 -- # wait 1344261 00:18:24.771 16:08:55 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:18:24.771 16:08:55 -- 
nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:18:24.771 00:18:24.771 real 0m21.996s 00:18:24.771 user 1m11.873s 00:18:24.771 sys 0m6.163s 00:18:24.771 16:08:55 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:24.771 16:08:55 -- common/autotest_common.sh@10 -- # set +x 00:18:24.771 ************************************ 00:18:24.771 END TEST nvmf_lvol 00:18:24.771 ************************************ 00:18:24.771 16:08:55 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:18:24.771 16:08:55 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:24.771 16:08:55 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:24.771 16:08:55 -- common/autotest_common.sh@10 -- # set +x 00:18:24.771 ************************************ 00:18:24.771 START TEST nvmf_lvs_grow 00:18:24.771 ************************************ 00:18:24.771 16:08:55 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=rdma 00:18:24.771 * Looking for test storage... 00:18:24.771 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:18:24.771 16:08:55 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:18:24.771 16:08:55 -- common/autotest_common.sh@1690 -- # lcov --version 00:18:24.771 16:08:55 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:18:25.030 16:08:55 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:18:25.030 16:08:55 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:18:25.030 16:08:55 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:18:25.030 16:08:55 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:18:25.030 16:08:55 -- scripts/common.sh@335 -- # IFS=.-: 00:18:25.030 16:08:55 -- scripts/common.sh@335 -- # read -ra ver1 00:18:25.030 16:08:55 -- scripts/common.sh@336 -- # IFS=.-: 00:18:25.030 16:08:55 -- scripts/common.sh@336 -- # read -ra ver2 00:18:25.030 16:08:55 -- scripts/common.sh@337 -- # local 'op=<' 00:18:25.030 16:08:55 -- scripts/common.sh@339 -- # ver1_l=2 00:18:25.030 16:08:55 -- scripts/common.sh@340 -- # ver2_l=1 00:18:25.030 16:08:55 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:18:25.030 16:08:55 -- scripts/common.sh@343 -- # case "$op" in 00:18:25.030 16:08:55 -- scripts/common.sh@344 -- # : 1 00:18:25.030 16:08:55 -- scripts/common.sh@363 -- # (( v = 0 )) 00:18:25.030 16:08:55 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:25.030 16:08:55 -- scripts/common.sh@364 -- # decimal 1 00:18:25.030 16:08:55 -- scripts/common.sh@352 -- # local d=1 00:18:25.030 16:08:55 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:25.030 16:08:55 -- scripts/common.sh@354 -- # echo 1 00:18:25.030 16:08:55 -- scripts/common.sh@364 -- # ver1[v]=1 00:18:25.030 16:08:55 -- scripts/common.sh@365 -- # decimal 2 00:18:25.030 16:08:55 -- scripts/common.sh@352 -- # local d=2 00:18:25.030 16:08:55 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:25.030 16:08:55 -- scripts/common.sh@354 -- # echo 2 00:18:25.030 16:08:55 -- scripts/common.sh@365 -- # ver2[v]=2 00:18:25.030 16:08:55 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:18:25.030 16:08:55 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:18:25.030 16:08:55 -- scripts/common.sh@367 -- # return 0 00:18:25.030 16:08:55 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:25.030 16:08:55 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:18:25.030 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.031 --rc genhtml_branch_coverage=1 00:18:25.031 --rc genhtml_function_coverage=1 00:18:25.031 --rc genhtml_legend=1 00:18:25.031 --rc geninfo_all_blocks=1 00:18:25.031 --rc geninfo_unexecuted_blocks=1 00:18:25.031 00:18:25.031 ' 00:18:25.031 16:08:55 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:18:25.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.031 --rc genhtml_branch_coverage=1 00:18:25.031 --rc genhtml_function_coverage=1 00:18:25.031 --rc genhtml_legend=1 00:18:25.031 --rc geninfo_all_blocks=1 00:18:25.031 --rc geninfo_unexecuted_blocks=1 00:18:25.031 00:18:25.031 ' 00:18:25.031 16:08:55 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:18:25.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.031 --rc genhtml_branch_coverage=1 00:18:25.031 --rc genhtml_function_coverage=1 00:18:25.031 --rc genhtml_legend=1 00:18:25.031 --rc geninfo_all_blocks=1 00:18:25.031 --rc geninfo_unexecuted_blocks=1 00:18:25.031 00:18:25.031 ' 00:18:25.031 16:08:55 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:18:25.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:25.031 --rc genhtml_branch_coverage=1 00:18:25.031 --rc genhtml_function_coverage=1 00:18:25.031 --rc genhtml_legend=1 00:18:25.031 --rc geninfo_all_blocks=1 00:18:25.031 --rc geninfo_unexecuted_blocks=1 00:18:25.031 00:18:25.031 ' 00:18:25.031 16:08:55 -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:18:25.031 16:08:55 -- nvmf/common.sh@7 -- # uname -s 00:18:25.031 16:08:55 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:25.031 16:08:55 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:25.031 16:08:55 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:25.031 16:08:55 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:25.031 16:08:55 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:25.031 16:08:55 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:25.031 16:08:55 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:25.031 16:08:55 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:25.031 16:08:55 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:25.031 16:08:55 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:25.031 16:08:55 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:18:25.031 16:08:55 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:18:25.031 16:08:55 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:25.031 16:08:55 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:25.031 16:08:55 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:25.031 16:08:55 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:18:25.031 16:08:55 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:25.031 16:08:55 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:25.031 16:08:55 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:25.031 16:08:55 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.031 16:08:55 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.031 16:08:55 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.031 16:08:55 -- paths/export.sh@5 -- # export PATH 00:18:25.031 16:08:55 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:25.031 16:08:55 -- nvmf/common.sh@46 -- # : 0 00:18:25.031 16:08:55 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:18:25.031 16:08:55 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:18:25.031 16:08:55 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:18:25.031 16:08:55 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:25.031 16:08:55 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:25.031 16:08:55 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:18:25.031 16:08:55 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:18:25.031 16:08:55 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:18:25.031 16:08:55 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:25.031 16:08:55 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:25.031 16:08:55 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:18:25.031 16:08:55 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:18:25.031 16:08:55 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:25.031 16:08:55 -- nvmf/common.sh@436 -- # prepare_net_devs 00:18:25.031 16:08:55 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:18:25.031 16:08:55 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:18:25.031 16:08:55 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.031 16:08:55 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:25.031 16:08:55 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:25.031 16:08:55 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:18:25.031 16:08:55 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:18:25.031 16:08:55 -- nvmf/common.sh@284 -- # xtrace_disable 00:18:25.031 16:08:55 -- common/autotest_common.sh@10 -- # set +x 00:18:31.609 16:09:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:18:31.609 16:09:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:18:31.609 16:09:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:18:31.609 16:09:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:18:31.609 16:09:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:18:31.609 16:09:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:18:31.609 16:09:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:18:31.609 16:09:02 -- nvmf/common.sh@294 -- # net_devs=() 00:18:31.609 16:09:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:18:31.609 16:09:02 -- nvmf/common.sh@295 -- # e810=() 00:18:31.609 16:09:02 -- nvmf/common.sh@295 -- # local -ga e810 00:18:31.609 16:09:02 -- nvmf/common.sh@296 -- # x722=() 00:18:31.609 16:09:02 -- nvmf/common.sh@296 -- # local -ga x722 00:18:31.609 16:09:02 -- nvmf/common.sh@297 -- # mlx=() 00:18:31.609 16:09:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:18:31.609 16:09:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:31.609 16:09:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:31.610 16:09:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:31.610 16:09:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:31.610 16:09:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:31.610 16:09:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:31.610 16:09:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:31.610 16:09:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:31.610 16:09:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:31.610 16:09:02 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:31.610 16:09:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:31.610 16:09:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:18:31.610 16:09:02 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 
00:18:31.610 16:09:02 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:18:31.610 16:09:02 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:18:31.610 16:09:02 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:18:31.610 16:09:02 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:18:31.610 16:09:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:18:31.610 16:09:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:31.610 16:09:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:18:31.610 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:18:31.610 16:09:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:31.610 16:09:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:31.610 16:09:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:31.610 16:09:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:31.610 16:09:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:31.610 16:09:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:31.610 16:09:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:18:31.610 16:09:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:18:31.610 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:18:31.610 16:09:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:18:31.610 16:09:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:18:31.610 16:09:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:18:31.610 16:09:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:18:31.610 16:09:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:18:31.610 16:09:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:18:31.610 16:09:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:18:31.610 16:09:02 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:18:31.610 16:09:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:31.610 16:09:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.610 16:09:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:31.610 16:09:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.610 16:09:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:18:31.610 Found net devices under 0000:d9:00.0: mlx_0_0 00:18:31.610 16:09:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.610 16:09:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:18:31.610 16:09:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:31.610 16:09:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:18:31.610 16:09:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:31.610 16:09:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:18:31.610 Found net devices under 0000:d9:00.1: mlx_0_1 00:18:31.610 16:09:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:18:31.610 16:09:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:18:31.610 16:09:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:18:31.610 16:09:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:18:31.610 16:09:02 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:18:31.610 16:09:02 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:18:31.610 16:09:02 -- nvmf/common.sh@408 -- # rdma_device_init 00:18:31.610 16:09:02 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:18:31.610 16:09:02 -- nvmf/common.sh@57 -- # uname 00:18:31.610 16:09:02 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux 
']' 00:18:31.610 16:09:02 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:18:31.610 16:09:02 -- nvmf/common.sh@62 -- # modprobe ib_core 00:18:31.610 16:09:02 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:18:31.610 16:09:02 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:18:31.610 16:09:02 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:18:31.610 16:09:02 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:18:31.610 16:09:02 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:18:31.870 16:09:02 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:18:31.870 16:09:02 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:18:31.870 16:09:02 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:18:31.870 16:09:02 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:31.870 16:09:02 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:31.870 16:09:02 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:31.870 16:09:02 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:31.870 16:09:02 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:31.870 16:09:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:31.870 16:09:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.870 16:09:02 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:31.870 16:09:02 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:31.871 16:09:02 -- nvmf/common.sh@104 -- # continue 2 00:18:31.871 16:09:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:31.871 16:09:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.871 16:09:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:31.871 16:09:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.871 16:09:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:31.871 16:09:02 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:31.871 16:09:02 -- nvmf/common.sh@104 -- # continue 2 00:18:31.871 16:09:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:31.871 16:09:02 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:18:31.871 16:09:02 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:31.871 16:09:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:31.871 16:09:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:31.871 16:09:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:31.871 16:09:02 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:18:31.871 16:09:02 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:18:31.871 16:09:02 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:18:31.871 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:31.871 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:18:31.871 altname enp217s0f0np0 00:18:31.871 altname ens818f0np0 00:18:31.871 inet 192.168.100.8/24 scope global mlx_0_0 00:18:31.871 valid_lft forever preferred_lft forever 00:18:31.871 16:09:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:18:31.871 16:09:02 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:18:31.871 16:09:02 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:31.871 16:09:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:31.871 16:09:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:31.871 16:09:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:31.871 16:09:02 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:18:31.871 16:09:02 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:18:31.871 16:09:02 -- nvmf/common.sh@80 -- # 
ip addr show mlx_0_1 00:18:31.871 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:18:31.871 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:18:31.871 altname enp217s0f1np1 00:18:31.871 altname ens818f1np1 00:18:31.871 inet 192.168.100.9/24 scope global mlx_0_1 00:18:31.871 valid_lft forever preferred_lft forever 00:18:31.871 16:09:02 -- nvmf/common.sh@410 -- # return 0 00:18:31.871 16:09:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:18:31.871 16:09:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:18:31.871 16:09:02 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:18:31.871 16:09:02 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:18:31.871 16:09:02 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:18:31.871 16:09:02 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:18:31.871 16:09:02 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:18:31.871 16:09:02 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:18:31.871 16:09:02 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:18:31.871 16:09:02 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:18:31.871 16:09:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:31.871 16:09:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.871 16:09:02 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:18:31.871 16:09:02 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:18:31.871 16:09:02 -- nvmf/common.sh@104 -- # continue 2 00:18:31.871 16:09:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:18:31.871 16:09:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.871 16:09:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:18:31.871 16:09:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:18:31.871 16:09:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:18:31.871 16:09:02 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:18:31.871 16:09:02 -- nvmf/common.sh@104 -- # continue 2 00:18:31.871 16:09:02 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:31.871 16:09:02 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:18:31.871 16:09:02 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:18:31.871 16:09:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:31.871 16:09:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:18:31.871 16:09:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:31.871 16:09:02 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:18:31.871 16:09:02 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:18:31.871 16:09:02 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:18:31.871 16:09:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:18:31.871 16:09:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:18:31.871 16:09:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:18:31.871 16:09:02 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:18:31.871 192.168.100.9' 00:18:31.871 16:09:02 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:18:31.871 192.168.100.9' 00:18:31.871 16:09:02 -- nvmf/common.sh@445 -- # head -n 1 00:18:31.871 16:09:02 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:18:31.871 16:09:02 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:18:31.871 192.168.100.9' 00:18:31.871 16:09:02 -- nvmf/common.sh@446 -- # tail -n +2 00:18:31.871 16:09:02 -- nvmf/common.sh@446 -- # head -n 1 00:18:31.871 16:09:02 -- nvmf/common.sh@446 -- # 
NVMF_SECOND_TARGET_IP=192.168.100.9 00:18:31.871 16:09:02 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:18:31.871 16:09:02 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:18:31.871 16:09:02 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:18:31.871 16:09:02 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:18:31.871 16:09:02 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:18:31.871 16:09:02 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:18:31.871 16:09:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:18:31.871 16:09:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:31.871 16:09:02 -- common/autotest_common.sh@10 -- # set +x 00:18:31.871 16:09:02 -- nvmf/common.sh@469 -- # nvmfpid=1350524 00:18:31.871 16:09:02 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:18:31.871 16:09:02 -- nvmf/common.sh@470 -- # waitforlisten 1350524 00:18:31.871 16:09:02 -- common/autotest_common.sh@829 -- # '[' -z 1350524 ']' 00:18:31.871 16:09:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.871 16:09:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.871 16:09:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.871 16:09:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.871 16:09:02 -- common/autotest_common.sh@10 -- # set +x 00:18:31.871 [2024-11-20 16:09:02.653338] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:18:31.871 [2024-11-20 16:09:02.653389] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:32.131 EAL: No free 2048 kB hugepages reported on node 1 00:18:32.131 [2024-11-20 16:09:02.723909] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.131 [2024-11-20 16:09:02.760335] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:18:32.131 [2024-11-20 16:09:02.760442] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:32.131 [2024-11-20 16:09:02.760453] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:32.131 [2024-11-20 16:09:02.760462] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
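The block above is nvmf/common.sh bringing up the RDMA test fabric: it loads the IB/RDMA kernel modules, walks the Mellanox ports, records 192.168.100.8 and 192.168.100.9 as the first and second target IPs, and then nvmfappstart launches the target (pid 1350524). A minimal sketch of that discovery, using only commands visible in the trace (the mlx_0_* interface names and 192.168.100.x addresses are specific to this rig):

    # load the kernel modules the RDMA transport needs
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
        modprobe "$m"
    done
    # read the IPv4 address off each Mellanox port, as get_ip_address does
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.9
    # options later handed to nvmf_create_transport, plus the host-side driver
    NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
    modprobe nvme-rdma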
00:18:32.131 [2024-11-20 16:09:02.760488] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.700 16:09:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.700 16:09:03 -- common/autotest_common.sh@862 -- # return 0 00:18:32.700 16:09:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:18:32.700 16:09:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:32.700 16:09:03 -- common/autotest_common.sh@10 -- # set +x 00:18:32.960 16:09:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:32.960 16:09:03 -- target/nvmf_lvs_grow.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:18:32.960 [2024-11-20 16:09:03.702575] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xad3240/0xad76f0) succeed. 00:18:32.960 [2024-11-20 16:09:03.711764] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xad46f0/0xb18d90) succeed. 00:18:33.219 16:09:03 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:18:33.219 16:09:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:33.219 16:09:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:33.219 16:09:03 -- common/autotest_common.sh@10 -- # set +x 00:18:33.219 ************************************ 00:18:33.219 START TEST lvs_grow_clean 00:18:33.219 ************************************ 00:18:33.219 16:09:03 -- common/autotest_common.sh@1114 -- # lvs_grow 00:18:33.219 16:09:03 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:33.219 16:09:03 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:33.219 16:09:03 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:33.219 16:09:03 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:33.219 16:09:03 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:33.219 16:09:03 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:33.219 16:09:03 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:33.219 16:09:03 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:33.219 16:09:03 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:33.219 16:09:04 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:33.219 16:09:04 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:33.478 16:09:04 -- target/nvmf_lvs_grow.sh@28 -- # lvs=27221759-a671-4b45-96ee-78a58dd8ae4f 00:18:33.478 16:09:04 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27221759-a671-4b45-96ee-78a58dd8ae4f 00:18:33.478 16:09:04 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:33.738 16:09:04 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:33.738 16:09:04 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:33.738 16:09:04 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
27221759-a671-4b45-96ee-78a58dd8ae4f lvol 150 00:18:33.738 16:09:04 -- target/nvmf_lvs_grow.sh@33 -- # lvol=a9cd63ca-31d6-4492-abf9-4263e1d877ae 00:18:33.738 16:09:04 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:34.049 16:09:04 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:18:34.050 [2024-11-20 16:09:04.693927] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:34.050 [2024-11-20 16:09:04.693974] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:34.050 true 00:18:34.050 16:09:04 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:34.050 16:09:04 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27221759-a671-4b45-96ee-78a58dd8ae4f 00:18:34.379 16:09:04 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:34.379 16:09:04 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:34.379 16:09:05 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a9cd63ca-31d6-4492-abf9-4263e1d877ae 00:18:34.638 16:09:05 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:34.638 [2024-11-20 16:09:05.396265] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:18:34.638 16:09:05 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:34.897 16:09:05 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1351042 00:18:34.897 16:09:05 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:34.897 16:09:05 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:34.897 16:09:05 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1351042 /var/tmp/bdevperf.sock 00:18:34.897 16:09:05 -- common/autotest_common.sh@829 -- # '[' -z 1351042 ']' 00:18:34.897 16:09:05 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:34.897 16:09:05 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:34.897 16:09:05 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:34.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:34.897 16:09:05 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:34.897 16:09:05 -- common/autotest_common.sh@10 -- # set +x 00:18:34.897 [2024-11-20 16:09:05.616659] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:34.897 [2024-11-20 16:09:05.616710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1351042 ] 00:18:34.897 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.897 [2024-11-20 16:09:05.686639] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.157 [2024-11-20 16:09:05.724498] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.724 16:09:06 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:35.724 16:09:06 -- common/autotest_common.sh@862 -- # return 0 00:18:35.725 16:09:06 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:35.983 Nvme0n1 00:18:35.983 16:09:06 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:36.243 [ 00:18:36.243 { 00:18:36.243 "name": "Nvme0n1", 00:18:36.243 "aliases": [ 00:18:36.243 "a9cd63ca-31d6-4492-abf9-4263e1d877ae" 00:18:36.243 ], 00:18:36.243 "product_name": "NVMe disk", 00:18:36.243 "block_size": 4096, 00:18:36.243 "num_blocks": 38912, 00:18:36.243 "uuid": "a9cd63ca-31d6-4492-abf9-4263e1d877ae", 00:18:36.243 "assigned_rate_limits": { 00:18:36.243 "rw_ios_per_sec": 0, 00:18:36.243 "rw_mbytes_per_sec": 0, 00:18:36.243 "r_mbytes_per_sec": 0, 00:18:36.243 "w_mbytes_per_sec": 0 00:18:36.243 }, 00:18:36.243 "claimed": false, 00:18:36.243 "zoned": false, 00:18:36.243 "supported_io_types": { 00:18:36.243 "read": true, 00:18:36.243 "write": true, 00:18:36.243 "unmap": true, 00:18:36.243 "write_zeroes": true, 00:18:36.243 "flush": true, 00:18:36.243 "reset": true, 00:18:36.243 "compare": true, 00:18:36.243 "compare_and_write": true, 00:18:36.243 "abort": true, 00:18:36.243 "nvme_admin": true, 00:18:36.243 "nvme_io": true 00:18:36.243 }, 00:18:36.243 "memory_domains": [ 00:18:36.243 { 00:18:36.243 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:36.243 "dma_device_type": 0 00:18:36.243 } 00:18:36.243 ], 00:18:36.243 "driver_specific": { 00:18:36.243 "nvme": [ 00:18:36.243 { 00:18:36.243 "trid": { 00:18:36.243 "trtype": "RDMA", 00:18:36.243 "adrfam": "IPv4", 00:18:36.243 "traddr": "192.168.100.8", 00:18:36.243 "trsvcid": "4420", 00:18:36.243 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:36.243 }, 00:18:36.243 "ctrlr_data": { 00:18:36.243 "cntlid": 1, 00:18:36.243 "vendor_id": "0x8086", 00:18:36.243 "model_number": "SPDK bdev Controller", 00:18:36.243 "serial_number": "SPDK0", 00:18:36.243 "firmware_revision": "24.01.1", 00:18:36.243 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:36.243 "oacs": { 00:18:36.243 "security": 0, 00:18:36.243 "format": 0, 00:18:36.243 "firmware": 0, 00:18:36.243 "ns_manage": 0 00:18:36.243 }, 00:18:36.243 "multi_ctrlr": true, 00:18:36.243 "ana_reporting": false 00:18:36.243 }, 00:18:36.243 "vs": { 00:18:36.243 "nvme_version": "1.3" 00:18:36.243 }, 00:18:36.243 "ns_data": { 00:18:36.243 "id": 1, 00:18:36.243 "can_share": true 00:18:36.243 } 00:18:36.243 } 00:18:36.243 ], 00:18:36.243 "mp_policy": "active_passive" 00:18:36.243 } 00:18:36.243 } 00:18:36.243 ] 00:18:36.243 16:09:06 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1351178 00:18:36.243 16:09:06 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:36.243 16:09:06 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:36.243 Running I/O for 10 seconds... 00:18:37.180 Latency(us) 00:18:37.180 [2024-11-20T15:09:07.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.180 [2024-11-20T15:09:07.985Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:37.180 Nvme0n1 : 1.00 36581.00 142.89 0.00 0.00 0.00 0.00 0.00 00:18:37.180 [2024-11-20T15:09:07.985Z] =================================================================================================================== 00:18:37.180 [2024-11-20T15:09:07.985Z] Total : 36581.00 142.89 0.00 0.00 0.00 0.00 0.00 00:18:37.180 00:18:38.117 16:09:08 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 27221759-a671-4b45-96ee-78a58dd8ae4f 00:18:38.376 [2024-11-20T15:09:09.181Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:38.376 Nvme0n1 : 2.00 36657.50 143.19 0.00 0.00 0.00 0.00 0.00 00:18:38.376 [2024-11-20T15:09:09.181Z] =================================================================================================================== 00:18:38.376 [2024-11-20T15:09:09.181Z] Total : 36657.50 143.19 0.00 0.00 0.00 0.00 0.00 00:18:38.376 00:18:38.376 true 00:18:38.376 16:09:09 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27221759-a671-4b45-96ee-78a58dd8ae4f 00:18:38.376 16:09:09 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:38.635 16:09:09 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:38.635 16:09:09 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:38.635 16:09:09 -- target/nvmf_lvs_grow.sh@65 -- # wait 1351178 00:18:39.204 [2024-11-20T15:09:10.009Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:39.204 Nvme0n1 : 3.00 36875.33 144.04 0.00 0.00 0.00 0.00 0.00 00:18:39.204 [2024-11-20T15:09:10.009Z] =================================================================================================================== 00:18:39.204 [2024-11-20T15:09:10.009Z] Total : 36875.33 144.04 0.00 0.00 0.00 0.00 0.00 00:18:39.204 00:18:40.583 [2024-11-20T15:09:11.388Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:40.583 Nvme0n1 : 4.00 36896.50 144.13 0.00 0.00 0.00 0.00 0.00 00:18:40.583 [2024-11-20T15:09:11.388Z] =================================================================================================================== 00:18:40.583 [2024-11-20T15:09:11.388Z] Total : 36896.50 144.13 0.00 0.00 0.00 0.00 0.00 00:18:40.583 00:18:41.528 [2024-11-20T15:09:12.333Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:41.528 Nvme0n1 : 5.00 37069.00 144.80 0.00 0.00 0.00 0.00 0.00 00:18:41.528 [2024-11-20T15:09:12.333Z] =================================================================================================================== 00:18:41.528 [2024-11-20T15:09:12.333Z] Total : 37069.00 144.80 0.00 0.00 0.00 0.00 0.00 00:18:41.528 00:18:42.467 [2024-11-20T15:09:13.272Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:42.467 Nvme0n1 : 6.00 37200.17 145.31 0.00 0.00 0.00 0.00 0.00 00:18:42.467 [2024-11-20T15:09:13.272Z] 
=================================================================================================================== 00:18:42.467 [2024-11-20T15:09:13.272Z] Total : 37200.17 145.31 0.00 0.00 0.00 0.00 0.00 00:18:42.467 00:18:43.404 [2024-11-20T15:09:14.209Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:43.404 Nvme0n1 : 7.00 37257.00 145.54 0.00 0.00 0.00 0.00 0.00 00:18:43.404 [2024-11-20T15:09:14.209Z] =================================================================================================================== 00:18:43.404 [2024-11-20T15:09:14.209Z] Total : 37257.00 145.54 0.00 0.00 0.00 0.00 0.00 00:18:43.404 00:18:44.339 [2024-11-20T15:09:15.144Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:44.339 Nvme0n1 : 8.00 37324.00 145.80 0.00 0.00 0.00 0.00 0.00 00:18:44.339 [2024-11-20T15:09:15.144Z] =================================================================================================================== 00:18:44.339 [2024-11-20T15:09:15.144Z] Total : 37324.00 145.80 0.00 0.00 0.00 0.00 0.00 00:18:44.339 00:18:45.275 [2024-11-20T15:09:16.080Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:45.275 Nvme0n1 : 9.00 37380.00 146.02 0.00 0.00 0.00 0.00 0.00 00:18:45.275 [2024-11-20T15:09:16.081Z] =================================================================================================================== 00:18:45.276 [2024-11-20T15:09:16.081Z] Total : 37380.00 146.02 0.00 0.00 0.00 0.00 0.00 00:18:45.276 00:18:46.213 [2024-11-20T15:09:17.018Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:46.213 Nvme0n1 : 10.00 37420.60 146.17 0.00 0.00 0.00 0.00 0.00 00:18:46.213 [2024-11-20T15:09:17.018Z] =================================================================================================================== 00:18:46.213 [2024-11-20T15:09:17.018Z] Total : 37420.60 146.17 0.00 0.00 0.00 0.00 0.00 00:18:46.213 00:18:46.213 00:18:46.213 Latency(us) 00:18:46.213 [2024-11-20T15:09:17.018Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.213 [2024-11-20T15:09:17.018Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:46.213 Nvme0n1 : 10.00 37420.80 146.18 0.00 0.00 3418.38 2005.40 7969.18 00:18:46.213 [2024-11-20T15:09:17.018Z] =================================================================================================================== 00:18:46.213 [2024-11-20T15:09:17.018Z] Total : 37420.80 146.18 0.00 0.00 3418.38 2005.40 7969.18 00:18:46.213 0 00:18:46.213 16:09:16 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1351042 00:18:46.213 16:09:16 -- common/autotest_common.sh@936 -- # '[' -z 1351042 ']' 00:18:46.213 16:09:16 -- common/autotest_common.sh@940 -- # kill -0 1351042 00:18:46.213 16:09:16 -- common/autotest_common.sh@941 -- # uname 00:18:46.213 16:09:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:46.214 16:09:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1351042 00:18:46.473 16:09:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:18:46.473 16:09:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:18:46.473 16:09:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1351042' 00:18:46.473 killing process with pid 1351042 00:18:46.473 16:09:17 -- common/autotest_common.sh@955 -- # kill 1351042 00:18:46.473 Received shutdown signal, test time was about 10.000000 seconds 
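At this point the clean-grow pass has run its 10-second randwrite workload and torn down bdevperf. Stripped of the xtrace noise, the target-side sequence it exercised is roughly the following sketch (rpc.py stands for scripts/rpc.py in the SPDK tree; aio_file is a stand-in for the test/nvmf/target/aio_bdev backing file used above; the UUID variables hold whatever the create calls print on a given run):

    rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    truncate -s 200M aio_file
    rpc.py bdev_aio_create aio_file aio_bdev 4096
    lvs=$(rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 150)        # 150 MiB logical volume
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420
    # while bdevperf drives I/O against Nvme0n1, grow the backing file and the lvstore
    truncate -s 400M aio_file
    rpc.py bdev_aio_rescan aio_bdev
    rpc.py bdev_lvol_grow_lvstore -u "$lvs"

The total_data_clusters value reported by bdev_lvol_get_lvstores moving from 49 to 99 is the pass/fail signal for the grow.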
00:18:46.473 00:18:46.473 Latency(us) 00:18:46.473 [2024-11-20T15:09:17.278Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.473 [2024-11-20T15:09:17.278Z] =================================================================================================================== 00:18:46.473 [2024-11-20T15:09:17.278Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:46.473 16:09:17 -- common/autotest_common.sh@960 -- # wait 1351042 00:18:46.473 16:09:17 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:18:46.732 16:09:17 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27221759-a671-4b45-96ee-78a58dd8ae4f 00:18:46.732 16:09:17 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:18:46.992 16:09:17 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:18:46.992 16:09:17 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:18:46.992 16:09:17 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:46.992 [2024-11-20 16:09:17.776341] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:18:47.251 16:09:17 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27221759-a671-4b45-96ee-78a58dd8ae4f 00:18:47.251 16:09:17 -- common/autotest_common.sh@650 -- # local es=0 00:18:47.251 16:09:17 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27221759-a671-4b45-96ee-78a58dd8ae4f 00:18:47.251 16:09:17 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:47.251 16:09:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.251 16:09:17 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:47.251 16:09:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.251 16:09:17 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:47.251 16:09:17 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:47.251 16:09:17 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:18:47.251 16:09:17 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:18:47.251 16:09:17 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27221759-a671-4b45-96ee-78a58dd8ae4f 00:18:47.251 request: 00:18:47.251 { 00:18:47.251 "uuid": "27221759-a671-4b45-96ee-78a58dd8ae4f", 00:18:47.251 "method": "bdev_lvol_get_lvstores", 00:18:47.251 "req_id": 1 00:18:47.251 } 00:18:47.251 Got JSON-RPC error response 00:18:47.251 response: 00:18:47.251 { 00:18:47.252 "code": -19, 00:18:47.252 "message": "No such device" 00:18:47.252 } 00:18:47.252 16:09:18 -- common/autotest_common.sh@653 -- # es=1 00:18:47.252 16:09:18 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:47.252 16:09:18 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:47.252 16:09:18 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:47.252 16:09:18 -- 
target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:47.511 aio_bdev 00:18:47.511 16:09:18 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev a9cd63ca-31d6-4492-abf9-4263e1d877ae 00:18:47.511 16:09:18 -- common/autotest_common.sh@897 -- # local bdev_name=a9cd63ca-31d6-4492-abf9-4263e1d877ae 00:18:47.511 16:09:18 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:18:47.511 16:09:18 -- common/autotest_common.sh@899 -- # local i 00:18:47.511 16:09:18 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:18:47.511 16:09:18 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:18:47.511 16:09:18 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:18:47.769 16:09:18 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b a9cd63ca-31d6-4492-abf9-4263e1d877ae -t 2000 00:18:47.769 [ 00:18:47.769 { 00:18:47.769 "name": "a9cd63ca-31d6-4492-abf9-4263e1d877ae", 00:18:47.769 "aliases": [ 00:18:47.769 "lvs/lvol" 00:18:47.769 ], 00:18:47.769 "product_name": "Logical Volume", 00:18:47.769 "block_size": 4096, 00:18:47.769 "num_blocks": 38912, 00:18:47.769 "uuid": "a9cd63ca-31d6-4492-abf9-4263e1d877ae", 00:18:47.769 "assigned_rate_limits": { 00:18:47.769 "rw_ios_per_sec": 0, 00:18:47.769 "rw_mbytes_per_sec": 0, 00:18:47.769 "r_mbytes_per_sec": 0, 00:18:47.769 "w_mbytes_per_sec": 0 00:18:47.769 }, 00:18:47.769 "claimed": false, 00:18:47.769 "zoned": false, 00:18:47.769 "supported_io_types": { 00:18:47.769 "read": true, 00:18:47.769 "write": true, 00:18:47.769 "unmap": true, 00:18:47.769 "write_zeroes": true, 00:18:47.769 "flush": false, 00:18:47.769 "reset": true, 00:18:47.769 "compare": false, 00:18:47.769 "compare_and_write": false, 00:18:47.769 "abort": false, 00:18:47.769 "nvme_admin": false, 00:18:47.769 "nvme_io": false 00:18:47.769 }, 00:18:47.769 "driver_specific": { 00:18:47.769 "lvol": { 00:18:47.769 "lvol_store_uuid": "27221759-a671-4b45-96ee-78a58dd8ae4f", 00:18:47.769 "base_bdev": "aio_bdev", 00:18:47.769 "thin_provision": false, 00:18:47.769 "snapshot": false, 00:18:47.769 "clone": false, 00:18:47.769 "esnap_clone": false 00:18:47.769 } 00:18:47.769 } 00:18:47.769 } 00:18:47.769 ] 00:18:47.769 16:09:18 -- common/autotest_common.sh@905 -- # return 0 00:18:47.769 16:09:18 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27221759-a671-4b45-96ee-78a58dd8ae4f 00:18:47.769 16:09:18 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:18:48.029 16:09:18 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:18:48.029 16:09:18 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 27221759-a671-4b45-96ee-78a58dd8ae4f 00:18:48.029 16:09:18 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:18:48.289 16:09:18 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:18:48.289 16:09:18 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete a9cd63ca-31d6-4492-abf9-4263e1d877ae 00:18:48.289 16:09:19 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 27221759-a671-4b45-96ee-78a58dd8ae4f 00:18:48.547 16:09:19 -- 
target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:18:48.807 16:09:19 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:48.807 00:18:48.807 real 0m15.652s 00:18:48.807 user 0m15.662s 00:18:48.807 sys 0m1.079s 00:18:48.807 16:09:19 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:18:48.807 16:09:19 -- common/autotest_common.sh@10 -- # set +x 00:18:48.807 ************************************ 00:18:48.807 END TEST lvs_grow_clean 00:18:48.807 ************************************ 00:18:48.807 16:09:19 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty lvs_grow dirty 00:18:48.807 16:09:19 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:48.807 16:09:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:48.807 16:09:19 -- common/autotest_common.sh@10 -- # set +x 00:18:48.807 ************************************ 00:18:48.807 START TEST lvs_grow_dirty 00:18:48.807 ************************************ 00:18:48.807 16:09:19 -- common/autotest_common.sh@1114 -- # lvs_grow dirty 00:18:48.807 16:09:19 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:18:48.807 16:09:19 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:18:48.807 16:09:19 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:18:48.807 16:09:19 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:18:48.807 16:09:19 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:18:48.807 16:09:19 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:18:48.807 16:09:19 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:48.807 16:09:19 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:48.807 16:09:19 -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:18:49.066 16:09:19 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:18:49.066 16:09:19 -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:18:49.327 16:09:19 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c22bd12c-7110-41be-8597-a8055e0ab431 00:18:49.327 16:09:19 -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c22bd12c-7110-41be-8597-a8055e0ab431 00:18:49.327 16:09:19 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:18:49.327 16:09:20 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:18:49.327 16:09:20 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:18:49.327 16:09:20 -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c22bd12c-7110-41be-8597-a8055e0ab431 lvol 150 00:18:49.586 16:09:20 -- target/nvmf_lvs_grow.sh@33 -- # lvol=603ad876-ba7f-4c1e-8d8e-eb519d7e5777 00:18:49.586 16:09:20 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:18:49.586 16:09:20 -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan 
aio_bdev 00:18:49.844 [2024-11-20 16:09:20.413824] bdev_aio.c: 959:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:18:49.844 [2024-11-20 16:09:20.413873] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:18:49.844 true 00:18:49.844 16:09:20 -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c22bd12c-7110-41be-8597-a8055e0ab431 00:18:49.844 16:09:20 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:18:49.844 16:09:20 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:18:49.844 16:09:20 -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:18:50.104 16:09:20 -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 603ad876-ba7f-4c1e-8d8e-eb519d7e5777 00:18:50.364 16:09:20 -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:18:50.364 16:09:21 -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:18:50.624 16:09:21 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1354217 00:18:50.624 16:09:21 -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:18:50.624 16:09:21 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:50.624 16:09:21 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1354217 /var/tmp/bdevperf.sock 00:18:50.624 16:09:21 -- common/autotest_common.sh@829 -- # '[' -z 1354217 ']' 00:18:50.624 16:09:21 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.624 16:09:21 -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:50.624 16:09:21 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:50.624 16:09:21 -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:50.624 16:09:21 -- common/autotest_common.sh@10 -- # set +x 00:18:50.624 [2024-11-20 16:09:21.316884] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:18:50.624 [2024-11-20 16:09:21.316940] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1354217 ] 00:18:50.624 EAL: No free 2048 kB hugepages reported on node 1 00:18:50.624 [2024-11-20 16:09:21.387495] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.624 [2024-11-20 16:09:21.424618] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.562 16:09:22 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:51.562 16:09:22 -- common/autotest_common.sh@862 -- # return 0 00:18:51.562 16:09:22 -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:18:51.822 Nvme0n1 00:18:51.822 16:09:22 -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:18:51.822 [ 00:18:51.822 { 00:18:51.822 "name": "Nvme0n1", 00:18:51.822 "aliases": [ 00:18:51.822 "603ad876-ba7f-4c1e-8d8e-eb519d7e5777" 00:18:51.822 ], 00:18:51.822 "product_name": "NVMe disk", 00:18:51.822 "block_size": 4096, 00:18:51.822 "num_blocks": 38912, 00:18:51.822 "uuid": "603ad876-ba7f-4c1e-8d8e-eb519d7e5777", 00:18:51.822 "assigned_rate_limits": { 00:18:51.822 "rw_ios_per_sec": 0, 00:18:51.822 "rw_mbytes_per_sec": 0, 00:18:51.822 "r_mbytes_per_sec": 0, 00:18:51.822 "w_mbytes_per_sec": 0 00:18:51.822 }, 00:18:51.822 "claimed": false, 00:18:51.822 "zoned": false, 00:18:51.822 "supported_io_types": { 00:18:51.822 "read": true, 00:18:51.822 "write": true, 00:18:51.822 "unmap": true, 00:18:51.822 "write_zeroes": true, 00:18:51.822 "flush": true, 00:18:51.822 "reset": true, 00:18:51.822 "compare": true, 00:18:51.822 "compare_and_write": true, 00:18:51.822 "abort": true, 00:18:51.822 "nvme_admin": true, 00:18:51.822 "nvme_io": true 00:18:51.822 }, 00:18:51.822 "memory_domains": [ 00:18:51.822 { 00:18:51.822 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:18:51.822 "dma_device_type": 0 00:18:51.822 } 00:18:51.822 ], 00:18:51.822 "driver_specific": { 00:18:51.822 "nvme": [ 00:18:51.822 { 00:18:51.822 "trid": { 00:18:51.822 "trtype": "RDMA", 00:18:51.822 "adrfam": "IPv4", 00:18:51.822 "traddr": "192.168.100.8", 00:18:51.822 "trsvcid": "4420", 00:18:51.822 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:18:51.822 }, 00:18:51.822 "ctrlr_data": { 00:18:51.822 "cntlid": 1, 00:18:51.822 "vendor_id": "0x8086", 00:18:51.822 "model_number": "SPDK bdev Controller", 00:18:51.822 "serial_number": "SPDK0", 00:18:51.822 "firmware_revision": "24.01.1", 00:18:51.822 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:18:51.822 "oacs": { 00:18:51.822 "security": 0, 00:18:51.822 "format": 0, 00:18:51.822 "firmware": 0, 00:18:51.822 "ns_manage": 0 00:18:51.822 }, 00:18:51.822 "multi_ctrlr": true, 00:18:51.822 "ana_reporting": false 00:18:51.823 }, 00:18:51.823 "vs": { 00:18:51.823 "nvme_version": "1.3" 00:18:51.823 }, 00:18:51.823 "ns_data": { 00:18:51.823 "id": 1, 00:18:51.823 "can_share": true 00:18:51.823 } 00:18:51.823 } 00:18:51.823 ], 00:18:51.823 "mp_policy": "active_passive" 00:18:51.823 } 00:18:51.823 } 00:18:51.823 ] 00:18:51.823 16:09:22 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1354370 00:18:51.823 16:09:22 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:18:51.823 16:09:22 -- 
target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:52.082 Running I/O for 10 seconds... 00:18:53.021 Latency(us) 00:18:53.021 [2024-11-20T15:09:23.826Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:53.021 [2024-11-20T15:09:23.826Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:53.021 Nvme0n1 : 1.00 36667.00 143.23 0.00 0.00 0.00 0.00 0.00 00:18:53.021 [2024-11-20T15:09:23.826Z] =================================================================================================================== 00:18:53.021 [2024-11-20T15:09:23.826Z] Total : 36667.00 143.23 0.00 0.00 0.00 0.00 0.00 00:18:53.021 00:18:53.958 16:09:24 -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c22bd12c-7110-41be-8597-a8055e0ab431 00:18:53.958 [2024-11-20T15:09:24.763Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:53.958 Nvme0n1 : 2.00 36973.00 144.43 0.00 0.00 0.00 0.00 0.00 00:18:53.958 [2024-11-20T15:09:24.763Z] =================================================================================================================== 00:18:53.958 [2024-11-20T15:09:24.763Z] Total : 36973.00 144.43 0.00 0.00 0.00 0.00 0.00 00:18:53.958 00:18:54.217 true 00:18:54.217 16:09:24 -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c22bd12c-7110-41be-8597-a8055e0ab431 00:18:54.217 16:09:24 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:18:54.217 16:09:24 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:18:54.217 16:09:24 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:18:54.217 16:09:24 -- target/nvmf_lvs_grow.sh@65 -- # wait 1354370 00:18:55.155 [2024-11-20T15:09:25.960Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:55.155 Nvme0n1 : 3.00 37107.00 144.95 0.00 0.00 0.00 0.00 0.00 00:18:55.155 [2024-11-20T15:09:25.960Z] =================================================================================================================== 00:18:55.155 [2024-11-20T15:09:25.960Z] Total : 37107.00 144.95 0.00 0.00 0.00 0.00 0.00 00:18:55.155 00:18:56.109 [2024-11-20T15:09:26.914Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:56.109 Nvme0n1 : 4.00 37273.75 145.60 0.00 0.00 0.00 0.00 0.00 00:18:56.109 [2024-11-20T15:09:26.914Z] =================================================================================================================== 00:18:56.109 [2024-11-20T15:09:26.914Z] Total : 37273.75 145.60 0.00 0.00 0.00 0.00 0.00 00:18:56.109 00:18:57.046 [2024-11-20T15:09:27.851Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:57.046 Nvme0n1 : 5.00 37368.20 145.97 0.00 0.00 0.00 0.00 0.00 00:18:57.046 [2024-11-20T15:09:27.851Z] =================================================================================================================== 00:18:57.046 [2024-11-20T15:09:27.851Z] Total : 37368.20 145.97 0.00 0.00 0.00 0.00 0.00 00:18:57.046 00:18:57.981 [2024-11-20T15:09:28.786Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:57.981 Nvme0n1 : 6.00 37446.17 146.27 0.00 0.00 0.00 0.00 0.00 00:18:57.981 [2024-11-20T15:09:28.786Z] 
=================================================================================================================== 00:18:57.981 [2024-11-20T15:09:28.786Z] Total : 37446.17 146.27 0.00 0.00 0.00 0.00 0.00 00:18:57.981 00:18:58.987 [2024-11-20T15:09:29.792Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:58.987 Nvme0n1 : 7.00 37475.71 146.39 0.00 0.00 0.00 0.00 0.00 00:18:58.987 [2024-11-20T15:09:29.792Z] =================================================================================================================== 00:18:58.987 [2024-11-20T15:09:29.792Z] Total : 37475.71 146.39 0.00 0.00 0.00 0.00 0.00 00:18:58.987 00:18:59.925 [2024-11-20T15:09:30.730Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:18:59.925 Nvme0n1 : 8.00 37504.88 146.50 0.00 0.00 0.00 0.00 0.00 00:18:59.925 [2024-11-20T15:09:30.730Z] =================================================================================================================== 00:18:59.925 [2024-11-20T15:09:30.730Z] Total : 37504.88 146.50 0.00 0.00 0.00 0.00 0.00 00:18:59.925 00:19:01.303 [2024-11-20T15:09:32.108Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:01.303 Nvme0n1 : 9.00 37542.11 146.65 0.00 0.00 0.00 0.00 0.00 00:19:01.303 [2024-11-20T15:09:32.108Z] =================================================================================================================== 00:19:01.303 [2024-11-20T15:09:32.108Z] Total : 37542.11 146.65 0.00 0.00 0.00 0.00 0.00 00:19:01.303 00:19:02.241 [2024-11-20T15:09:33.046Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:02.241 Nvme0n1 : 10.00 37570.30 146.76 0.00 0.00 0.00 0.00 0.00 00:19:02.241 [2024-11-20T15:09:33.046Z] =================================================================================================================== 00:19:02.241 [2024-11-20T15:09:33.046Z] Total : 37570.30 146.76 0.00 0.00 0.00 0.00 0.00 00:19:02.241 00:19:02.241 00:19:02.241 Latency(us) 00:19:02.241 [2024-11-20T15:09:33.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.241 [2024-11-20T15:09:33.046Z] Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:19:02.241 Nvme0n1 : 10.00 37570.71 146.76 0.00 0.00 3404.56 2097.15 12006.20 00:19:02.241 [2024-11-20T15:09:33.046Z] =================================================================================================================== 00:19:02.241 [2024-11-20T15:09:33.046Z] Total : 37570.71 146.76 0.00 0.00 3404.56 2097.15 12006.20 00:19:02.241 0 00:19:02.241 16:09:32 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1354217 00:19:02.241 16:09:32 -- common/autotest_common.sh@936 -- # '[' -z 1354217 ']' 00:19:02.241 16:09:32 -- common/autotest_common.sh@940 -- # kill -0 1354217 00:19:02.241 16:09:32 -- common/autotest_common.sh@941 -- # uname 00:19:02.241 16:09:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:02.241 16:09:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1354217 00:19:02.241 16:09:32 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:02.241 16:09:32 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:02.241 16:09:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1354217' 00:19:02.241 killing process with pid 1354217 00:19:02.241 16:09:32 -- common/autotest_common.sh@955 -- # kill 1354217 00:19:02.241 Received shutdown signal, test time was about 10.000000 seconds 
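Here the dirty-grow pass has finished its timed workload and killed bdevperf. The arithmetic behind the cluster checks that follow: the store was created with 4 MiB clusters (--cluster-sz 4194304), so the 150 MiB lvol pins 38 clusters, and after growing from 49 to 99 total clusters the expected free count is 99 - 38 = 61. A sketch of the verification, reusing the jq filters the script itself runs ($lvs here is the c22bd12c-... lvstore UUID printed earlier):

    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 99 after the grow
    rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'         # 61 = 99 - 38

What makes this the dirty variant is the teardown below: the first nvmf_tgt is killed with SIGKILL while the lvstore metadata is still dirty, so the follow-up bdev_aio_create has to replay the blobstore (the "Performing recovery on blobstore" notices) before the same cluster counts can be checked again.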
00:19:02.241 00:19:02.241 Latency(us) 00:19:02.241 [2024-11-20T15:09:33.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:02.241 [2024-11-20T15:09:33.046Z] =================================================================================================================== 00:19:02.241 [2024-11-20T15:09:33.046Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:02.241 16:09:32 -- common/autotest_common.sh@960 -- # wait 1354217 00:19:02.241 16:09:32 -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:19:02.501 16:09:33 -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c22bd12c-7110-41be-8597-a8055e0ab431 00:19:02.501 16:09:33 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:19:02.761 16:09:33 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:19:02.761 16:09:33 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:19:02.761 16:09:33 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 1350524 00:19:02.761 16:09:33 -- target/nvmf_lvs_grow.sh@74 -- # wait 1350524 00:19:02.761 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 1350524 Killed "${NVMF_APP[@]}" "$@" 00:19:02.761 16:09:33 -- target/nvmf_lvs_grow.sh@74 -- # true 00:19:02.761 16:09:33 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:19:02.761 16:09:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:02.761 16:09:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:02.761 16:09:33 -- common/autotest_common.sh@10 -- # set +x 00:19:02.761 16:09:33 -- nvmf/common.sh@469 -- # nvmfpid=1356204 00:19:02.761 16:09:33 -- nvmf/common.sh@470 -- # waitforlisten 1356204 00:19:02.761 16:09:33 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:02.761 16:09:33 -- common/autotest_common.sh@829 -- # '[' -z 1356204 ']' 00:19:02.761 16:09:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.761 16:09:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:02.761 16:09:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.761 16:09:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:02.761 16:09:33 -- common/autotest_common.sh@10 -- # set +x 00:19:02.761 [2024-11-20 16:09:33.446652] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:02.761 [2024-11-20 16:09:33.446705] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.761 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.761 [2024-11-20 16:09:33.518731] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:02.761 [2024-11-20 16:09:33.554600] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:02.761 [2024-11-20 16:09:33.554708] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:02.761 [2024-11-20 16:09:33.554718] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.761 [2024-11-20 16:09:33.554727] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:02.761 [2024-11-20 16:09:33.554746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.699 16:09:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:03.699 16:09:34 -- common/autotest_common.sh@862 -- # return 0 00:19:03.699 16:09:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:03.699 16:09:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:03.699 16:09:34 -- common/autotest_common.sh@10 -- # set +x 00:19:03.699 16:09:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:03.699 16:09:34 -- target/nvmf_lvs_grow.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:03.699 [2024-11-20 16:09:34.458829] blobstore.c:4642:bs_recover: *NOTICE*: Performing recovery on blobstore 00:19:03.699 [2024-11-20 16:09:34.458913] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:19:03.699 [2024-11-20 16:09:34.458939] blobstore.c:4589:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:19:03.699 16:09:34 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:19:03.699 16:09:34 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 603ad876-ba7f-4c1e-8d8e-eb519d7e5777 00:19:03.699 16:09:34 -- common/autotest_common.sh@897 -- # local bdev_name=603ad876-ba7f-4c1e-8d8e-eb519d7e5777 00:19:03.699 16:09:34 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:03.699 16:09:34 -- common/autotest_common.sh@899 -- # local i 00:19:03.699 16:09:34 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:03.699 16:09:34 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:03.699 16:09:34 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:03.959 16:09:34 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 603ad876-ba7f-4c1e-8d8e-eb519d7e5777 -t 2000 00:19:04.217 [ 00:19:04.217 { 00:19:04.217 "name": "603ad876-ba7f-4c1e-8d8e-eb519d7e5777", 00:19:04.217 "aliases": [ 00:19:04.217 "lvs/lvol" 00:19:04.217 ], 00:19:04.217 "product_name": "Logical Volume", 00:19:04.217 "block_size": 4096, 00:19:04.217 "num_blocks": 38912, 00:19:04.217 "uuid": "603ad876-ba7f-4c1e-8d8e-eb519d7e5777", 00:19:04.217 "assigned_rate_limits": { 00:19:04.218 "rw_ios_per_sec": 0, 00:19:04.218 "rw_mbytes_per_sec": 0, 00:19:04.218 "r_mbytes_per_sec": 0, 00:19:04.218 "w_mbytes_per_sec": 0 00:19:04.218 }, 00:19:04.218 "claimed": false, 00:19:04.218 "zoned": false, 00:19:04.218 "supported_io_types": { 00:19:04.218 "read": true, 00:19:04.218 "write": true, 00:19:04.218 "unmap": true, 00:19:04.218 "write_zeroes": true, 00:19:04.218 "flush": false, 00:19:04.218 "reset": true, 00:19:04.218 "compare": false, 00:19:04.218 "compare_and_write": false, 00:19:04.218 "abort": false, 00:19:04.218 "nvme_admin": false, 00:19:04.218 "nvme_io": false 00:19:04.218 }, 00:19:04.218 "driver_specific": { 00:19:04.218 "lvol": { 00:19:04.218 "lvol_store_uuid": "c22bd12c-7110-41be-8597-a8055e0ab431", 00:19:04.218 "base_bdev": "aio_bdev", 00:19:04.218 "thin_provision": false, 
00:19:04.218 "snapshot": false, 00:19:04.218 "clone": false, 00:19:04.218 "esnap_clone": false 00:19:04.218 } 00:19:04.218 } 00:19:04.218 } 00:19:04.218 ] 00:19:04.218 16:09:34 -- common/autotest_common.sh@905 -- # return 0 00:19:04.218 16:09:34 -- target/nvmf_lvs_grow.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c22bd12c-7110-41be-8597-a8055e0ab431 00:19:04.218 16:09:34 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:19:04.218 16:09:35 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:19:04.218 16:09:35 -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c22bd12c-7110-41be-8597-a8055e0ab431 00:19:04.218 16:09:35 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:19:04.477 16:09:35 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:19:04.477 16:09:35 -- target/nvmf_lvs_grow.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:19:04.736 [2024-11-20 16:09:35.355346] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:19:04.736 16:09:35 -- target/nvmf_lvs_grow.sh@84 -- # NOT /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c22bd12c-7110-41be-8597-a8055e0ab431 00:19:04.736 16:09:35 -- common/autotest_common.sh@650 -- # local es=0 00:19:04.736 16:09:35 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c22bd12c-7110-41be-8597-a8055e0ab431 00:19:04.736 16:09:35 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:04.736 16:09:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:04.736 16:09:35 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:04.736 16:09:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:04.736 16:09:35 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:04.736 16:09:35 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:04.736 16:09:35 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:04.736 16:09:35 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py ]] 00:19:04.736 16:09:35 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c22bd12c-7110-41be-8597-a8055e0ab431 00:19:04.995 request: 00:19:04.995 { 00:19:04.995 "uuid": "c22bd12c-7110-41be-8597-a8055e0ab431", 00:19:04.995 "method": "bdev_lvol_get_lvstores", 00:19:04.995 "req_id": 1 00:19:04.995 } 00:19:04.995 Got JSON-RPC error response 00:19:04.995 response: 00:19:04.995 { 00:19:04.995 "code": -19, 00:19:04.995 "message": "No such device" 00:19:04.995 } 00:19:04.995 16:09:35 -- common/autotest_common.sh@653 -- # es=1 00:19:04.995 16:09:35 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:04.995 16:09:35 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:04.995 16:09:35 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:04.995 16:09:35 -- target/nvmf_lvs_grow.sh@85 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:19:04.995 aio_bdev 00:19:04.995 16:09:35 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 603ad876-ba7f-4c1e-8d8e-eb519d7e5777 00:19:04.995 16:09:35 -- common/autotest_common.sh@897 -- # local bdev_name=603ad876-ba7f-4c1e-8d8e-eb519d7e5777 00:19:04.995 16:09:35 -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:19:04.995 16:09:35 -- common/autotest_common.sh@899 -- # local i 00:19:04.995 16:09:35 -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:19:04.995 16:09:35 -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:19:04.995 16:09:35 -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:05.254 16:09:35 -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 603ad876-ba7f-4c1e-8d8e-eb519d7e5777 -t 2000 00:19:05.514 [ 00:19:05.514 { 00:19:05.514 "name": "603ad876-ba7f-4c1e-8d8e-eb519d7e5777", 00:19:05.514 "aliases": [ 00:19:05.514 "lvs/lvol" 00:19:05.514 ], 00:19:05.514 "product_name": "Logical Volume", 00:19:05.514 "block_size": 4096, 00:19:05.514 "num_blocks": 38912, 00:19:05.514 "uuid": "603ad876-ba7f-4c1e-8d8e-eb519d7e5777", 00:19:05.514 "assigned_rate_limits": { 00:19:05.514 "rw_ios_per_sec": 0, 00:19:05.514 "rw_mbytes_per_sec": 0, 00:19:05.514 "r_mbytes_per_sec": 0, 00:19:05.514 "w_mbytes_per_sec": 0 00:19:05.514 }, 00:19:05.514 "claimed": false, 00:19:05.514 "zoned": false, 00:19:05.514 "supported_io_types": { 00:19:05.514 "read": true, 00:19:05.514 "write": true, 00:19:05.514 "unmap": true, 00:19:05.514 "write_zeroes": true, 00:19:05.514 "flush": false, 00:19:05.514 "reset": true, 00:19:05.514 "compare": false, 00:19:05.514 "compare_and_write": false, 00:19:05.514 "abort": false, 00:19:05.514 "nvme_admin": false, 00:19:05.514 "nvme_io": false 00:19:05.514 }, 00:19:05.514 "driver_specific": { 00:19:05.514 "lvol": { 00:19:05.514 "lvol_store_uuid": "c22bd12c-7110-41be-8597-a8055e0ab431", 00:19:05.514 "base_bdev": "aio_bdev", 00:19:05.514 "thin_provision": false, 00:19:05.514 "snapshot": false, 00:19:05.514 "clone": false, 00:19:05.514 "esnap_clone": false 00:19:05.514 } 00:19:05.514 } 00:19:05.514 } 00:19:05.514 ] 00:19:05.514 16:09:36 -- common/autotest_common.sh@905 -- # return 0 00:19:05.514 16:09:36 -- target/nvmf_lvs_grow.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c22bd12c-7110-41be-8597-a8055e0ab431 00:19:05.514 16:09:36 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:19:05.514 16:09:36 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:19:05.514 16:09:36 -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c22bd12c-7110-41be-8597-a8055e0ab431 00:19:05.514 16:09:36 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:19:05.774 16:09:36 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:19:05.774 16:09:36 -- target/nvmf_lvs_grow.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 603ad876-ba7f-4c1e-8d8e-eb519d7e5777 00:19:06.034 16:09:36 -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c22bd12c-7110-41be-8597-a8055e0ab431 00:19:06.034 16:09:36 -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete 
aio_bdev 00:19:06.294 16:09:37 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:19:06.294 00:19:06.294 real 0m17.560s 00:19:06.294 user 0m45.386s 00:19:06.294 sys 0m3.256s 00:19:06.294 16:09:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:06.294 16:09:37 -- common/autotest_common.sh@10 -- # set +x 00:19:06.294 ************************************ 00:19:06.294 END TEST lvs_grow_dirty 00:19:06.294 ************************************ 00:19:06.294 16:09:37 -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:19:06.294 16:09:37 -- common/autotest_common.sh@806 -- # type=--id 00:19:06.294 16:09:37 -- common/autotest_common.sh@807 -- # id=0 00:19:06.294 16:09:37 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:06.294 16:09:37 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:06.294 16:09:37 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:06.553 16:09:37 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:06.553 16:09:37 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:06.553 16:09:37 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:06.553 nvmf_trace.0 00:19:06.553 16:09:37 -- common/autotest_common.sh@821 -- # return 0 00:19:06.553 16:09:37 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:19:06.553 16:09:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:06.553 16:09:37 -- nvmf/common.sh@116 -- # sync 00:19:06.553 16:09:37 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:06.553 16:09:37 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:06.553 16:09:37 -- nvmf/common.sh@119 -- # set +e 00:19:06.553 16:09:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:06.553 16:09:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:06.553 rmmod nvme_rdma 00:19:06.553 rmmod nvme_fabrics 00:19:06.553 16:09:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:06.553 16:09:37 -- nvmf/common.sh@123 -- # set -e 00:19:06.553 16:09:37 -- nvmf/common.sh@124 -- # return 0 00:19:06.553 16:09:37 -- nvmf/common.sh@477 -- # '[' -n 1356204 ']' 00:19:06.553 16:09:37 -- nvmf/common.sh@478 -- # killprocess 1356204 00:19:06.553 16:09:37 -- common/autotest_common.sh@936 -- # '[' -z 1356204 ']' 00:19:06.553 16:09:37 -- common/autotest_common.sh@940 -- # kill -0 1356204 00:19:06.553 16:09:37 -- common/autotest_common.sh@941 -- # uname 00:19:06.553 16:09:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:06.553 16:09:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1356204 00:19:06.553 16:09:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:06.553 16:09:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:06.553 16:09:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1356204' 00:19:06.553 killing process with pid 1356204 00:19:06.553 16:09:37 -- common/autotest_common.sh@955 -- # kill 1356204 00:19:06.553 16:09:37 -- common/autotest_common.sh@960 -- # wait 1356204 00:19:06.819 16:09:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:06.819 16:09:37 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:06.819 00:19:06.819 real 0m41.946s 00:19:06.819 user 1m7.326s 00:19:06.819 sys 0m10.206s 00:19:06.819 16:09:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:06.819 16:09:37 -- common/autotest_common.sh@10 -- 
# set +x 00:19:06.819 ************************************ 00:19:06.819 END TEST nvmf_lvs_grow 00:19:06.819 ************************************ 00:19:06.820 16:09:37 -- nvmf/nvmf.sh@49 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:19:06.820 16:09:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:06.820 16:09:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:06.820 16:09:37 -- common/autotest_common.sh@10 -- # set +x 00:19:06.820 ************************************ 00:19:06.820 START TEST nvmf_bdev_io_wait 00:19:06.820 ************************************ 00:19:06.820 16:09:37 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=rdma 00:19:06.820 * Looking for test storage... 00:19:06.820 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:06.820 16:09:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:06.820 16:09:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:06.820 16:09:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:06.820 16:09:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:06.820 16:09:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:06.820 16:09:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:06.820 16:09:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:06.820 16:09:37 -- scripts/common.sh@335 -- # IFS=.-: 00:19:06.820 16:09:37 -- scripts/common.sh@335 -- # read -ra ver1 00:19:06.820 16:09:37 -- scripts/common.sh@336 -- # IFS=.-: 00:19:06.820 16:09:37 -- scripts/common.sh@336 -- # read -ra ver2 00:19:06.820 16:09:37 -- scripts/common.sh@337 -- # local 'op=<' 00:19:06.820 16:09:37 -- scripts/common.sh@339 -- # ver1_l=2 00:19:07.085 16:09:37 -- scripts/common.sh@340 -- # ver2_l=1 00:19:07.085 16:09:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:07.085 16:09:37 -- scripts/common.sh@343 -- # case "$op" in 00:19:07.085 16:09:37 -- scripts/common.sh@344 -- # : 1 00:19:07.085 16:09:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:07.085 16:09:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:07.085 16:09:37 -- scripts/common.sh@364 -- # decimal 1 00:19:07.085 16:09:37 -- scripts/common.sh@352 -- # local d=1 00:19:07.085 16:09:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:07.085 16:09:37 -- scripts/common.sh@354 -- # echo 1 00:19:07.085 16:09:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:07.085 16:09:37 -- scripts/common.sh@365 -- # decimal 2 00:19:07.085 16:09:37 -- scripts/common.sh@352 -- # local d=2 00:19:07.085 16:09:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:07.085 16:09:37 -- scripts/common.sh@354 -- # echo 2 00:19:07.085 16:09:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:07.085 16:09:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:07.085 16:09:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:07.085 16:09:37 -- scripts/common.sh@367 -- # return 0 00:19:07.085 16:09:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:07.085 16:09:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:07.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.085 --rc genhtml_branch_coverage=1 00:19:07.085 --rc genhtml_function_coverage=1 00:19:07.085 --rc genhtml_legend=1 00:19:07.085 --rc geninfo_all_blocks=1 00:19:07.085 --rc geninfo_unexecuted_blocks=1 00:19:07.085 00:19:07.085 ' 00:19:07.085 16:09:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:07.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.085 --rc genhtml_branch_coverage=1 00:19:07.085 --rc genhtml_function_coverage=1 00:19:07.085 --rc genhtml_legend=1 00:19:07.085 --rc geninfo_all_blocks=1 00:19:07.085 --rc geninfo_unexecuted_blocks=1 00:19:07.085 00:19:07.085 ' 00:19:07.085 16:09:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:07.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.085 --rc genhtml_branch_coverage=1 00:19:07.085 --rc genhtml_function_coverage=1 00:19:07.085 --rc genhtml_legend=1 00:19:07.085 --rc geninfo_all_blocks=1 00:19:07.085 --rc geninfo_unexecuted_blocks=1 00:19:07.085 00:19:07.085 ' 00:19:07.085 16:09:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:07.085 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:07.085 --rc genhtml_branch_coverage=1 00:19:07.085 --rc genhtml_function_coverage=1 00:19:07.085 --rc genhtml_legend=1 00:19:07.085 --rc geninfo_all_blocks=1 00:19:07.085 --rc geninfo_unexecuted_blocks=1 00:19:07.085 00:19:07.085 ' 00:19:07.085 16:09:37 -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:07.085 16:09:37 -- nvmf/common.sh@7 -- # uname -s 00:19:07.085 16:09:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:07.085 16:09:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:07.085 16:09:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:07.085 16:09:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:07.085 16:09:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:07.085 16:09:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:07.085 16:09:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:07.085 16:09:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:07.085 16:09:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:07.085 16:09:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:07.085 16:09:37 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:07.085 16:09:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:07.085 16:09:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:07.085 16:09:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:07.085 16:09:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:07.085 16:09:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:07.085 16:09:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:07.085 16:09:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:07.085 16:09:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:07.085 16:09:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.085 16:09:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.085 16:09:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.085 16:09:37 -- paths/export.sh@5 -- # export PATH 00:19:07.085 16:09:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:07.085 16:09:37 -- nvmf/common.sh@46 -- # : 0 00:19:07.085 16:09:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:07.085 16:09:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:07.085 16:09:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:07.085 16:09:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:07.085 16:09:37 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:07.085 16:09:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:07.085 16:09:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:07.085 16:09:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:07.085 16:09:37 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:07.085 16:09:37 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:07.085 16:09:37 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:19:07.085 16:09:37 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:07.085 16:09:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:07.085 16:09:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:07.085 16:09:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:07.085 16:09:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:07.085 16:09:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:07.085 16:09:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:07.085 16:09:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:07.085 16:09:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:07.085 16:09:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:07.085 16:09:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:07.085 16:09:37 -- common/autotest_common.sh@10 -- # set +x 00:19:13.658 16:09:44 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:13.658 16:09:44 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:13.658 16:09:44 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:13.658 16:09:44 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:13.658 16:09:44 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:13.658 16:09:44 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:13.658 16:09:44 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:13.658 16:09:44 -- nvmf/common.sh@294 -- # net_devs=() 00:19:13.658 16:09:44 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:13.658 16:09:44 -- nvmf/common.sh@295 -- # e810=() 00:19:13.658 16:09:44 -- nvmf/common.sh@295 -- # local -ga e810 00:19:13.658 16:09:44 -- nvmf/common.sh@296 -- # x722=() 00:19:13.658 16:09:44 -- nvmf/common.sh@296 -- # local -ga x722 00:19:13.658 16:09:44 -- nvmf/common.sh@297 -- # mlx=() 00:19:13.658 16:09:44 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:13.658 16:09:44 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:13.658 16:09:44 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:13.658 16:09:44 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:13.658 16:09:44 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:13.658 16:09:44 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:13.658 16:09:44 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:13.658 16:09:44 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:13.658 16:09:44 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:13.658 16:09:44 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:13.658 16:09:44 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:13.658 16:09:44 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:13.658 16:09:44 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:13.658 16:09:44 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:13.658 16:09:44 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:19:13.658 16:09:44 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:13.658 16:09:44 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:13.658 16:09:44 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:13.658 16:09:44 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:13.658 16:09:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:13.658 16:09:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:13.659 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:13.659 16:09:44 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:13.659 16:09:44 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:13.659 16:09:44 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:13.659 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:13.659 16:09:44 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:13.659 16:09:44 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:13.659 16:09:44 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:13.659 16:09:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.659 16:09:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:13.659 16:09:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.659 16:09:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:13.659 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:13.659 16:09:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.659 16:09:44 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:13.659 16:09:44 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:13.659 16:09:44 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:13.659 16:09:44 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:13.659 16:09:44 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:13.659 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:13.659 16:09:44 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:13.659 16:09:44 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:13.659 16:09:44 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:13.659 16:09:44 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:13.659 16:09:44 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:13.659 16:09:44 -- nvmf/common.sh@57 -- # uname 00:19:13.659 16:09:44 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:13.659 16:09:44 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:19:13.659 16:09:44 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:13.659 16:09:44 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:13.659 16:09:44 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:13.659 16:09:44 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:13.659 16:09:44 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:13.659 16:09:44 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:13.659 16:09:44 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:13.659 16:09:44 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:13.659 16:09:44 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:13.659 16:09:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:13.659 16:09:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:13.659 16:09:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:13.659 16:09:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:13.659 16:09:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:13.659 16:09:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:13.659 16:09:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.659 16:09:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:13.659 16:09:44 -- nvmf/common.sh@104 -- # continue 2 00:19:13.659 16:09:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:13.659 16:09:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.659 16:09:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.659 16:09:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:13.659 16:09:44 -- nvmf/common.sh@104 -- # continue 2 00:19:13.659 16:09:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:13.659 16:09:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:13.659 16:09:44 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:13.659 16:09:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:13.659 16:09:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:13.659 16:09:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:13.659 16:09:44 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:13.659 16:09:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:13.659 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:13.659 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:13.659 altname enp217s0f0np0 00:19:13.659 altname ens818f0np0 00:19:13.659 inet 192.168.100.8/24 scope global mlx_0_0 00:19:13.659 valid_lft forever preferred_lft forever 00:19:13.659 16:09:44 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:13.659 16:09:44 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:13.659 16:09:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:13.659 16:09:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:13.659 16:09:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:13.659 16:09:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:13.659 16:09:44 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:13.659 16:09:44 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:13.659 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:13.659 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:13.659 altname enp217s0f1np1 00:19:13.659 altname ens818f1np1 00:19:13.659 inet 192.168.100.9/24 scope global mlx_0_1 00:19:13.659 valid_lft forever preferred_lft forever 00:19:13.659 16:09:44 -- nvmf/common.sh@410 -- # return 0 00:19:13.659 16:09:44 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:13.659 16:09:44 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:13.659 16:09:44 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:13.659 16:09:44 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:13.659 16:09:44 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:13.659 16:09:44 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:13.659 16:09:44 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:13.659 16:09:44 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:13.659 16:09:44 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:13.659 16:09:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:13.659 16:09:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.659 16:09:44 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:13.659 16:09:44 -- nvmf/common.sh@104 -- # continue 2 00:19:13.659 16:09:44 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:13.659 16:09:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.659 16:09:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:13.659 16:09:44 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:13.659 16:09:44 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:13.659 16:09:44 -- nvmf/common.sh@104 -- # continue 2 00:19:13.659 16:09:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:13.659 16:09:44 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:13.659 16:09:44 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:13.659 16:09:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:13.659 16:09:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:13.659 16:09:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:13.659 16:09:44 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:13.659 16:09:44 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:13.659 16:09:44 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:13.659 16:09:44 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:13.659 16:09:44 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:13.659 16:09:44 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:13.659 16:09:44 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:13.659 192.168.100.9' 00:19:13.659 16:09:44 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:13.659 192.168.100.9' 00:19:13.659 16:09:44 -- nvmf/common.sh@445 -- # head -n 1 00:19:13.659 16:09:44 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:13.659 16:09:44 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:13.659 192.168.100.9' 00:19:13.659 16:09:44 -- nvmf/common.sh@446 -- # tail -n +2 00:19:13.659 16:09:44 -- nvmf/common.sh@446 -- # head -n 1 00:19:13.659 16:09:44 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:13.659 16:09:44 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:13.659 16:09:44 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:13.659 16:09:44 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:13.659 16:09:44 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:13.659 16:09:44 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:13.659 16:09:44 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:13.659 16:09:44 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:13.659 16:09:44 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:13.659 16:09:44 -- common/autotest_common.sh@10 -- # set +x 00:19:13.659 16:09:44 -- nvmf/common.sh@469 -- # nvmfpid=1360258 00:19:13.659 16:09:44 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:13.659 16:09:44 -- nvmf/common.sh@470 -- # waitforlisten 1360258 00:19:13.659 16:09:44 -- common/autotest_common.sh@829 -- # '[' -z 1360258 ']' 00:19:13.660 16:09:44 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:13.660 16:09:44 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:13.660 16:09:44 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:13.660 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:13.660 16:09:44 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:13.660 16:09:44 -- common/autotest_common.sh@10 -- # set +x 00:19:13.660 [2024-11-20 16:09:44.330375] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:13.660 [2024-11-20 16:09:44.330425] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:13.660 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.660 [2024-11-20 16:09:44.400453] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:13.660 [2024-11-20 16:09:44.438967] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:13.660 [2024-11-20 16:09:44.439078] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:13.660 [2024-11-20 16:09:44.439087] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:13.660 [2024-11-20 16:09:44.439096] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
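The trace above derives the two target addresses by querying the RDMA-capable netdevs directly. A minimal sketch of that discovery step, assuming mlx_0_0/mlx_0_1 are already configured; pick_rdma_ip is an illustrative helper name, not a function from nvmf/common.sh:

# extract the IPv4 address of an RDMA-capable interface, as allocate_nic_ips/get_ip_address do above
pick_rdma_ip() {                                   # illustrative helper, not part of the test scripts
    local ifname=$1
    ip -o -4 addr show "$ifname" | awk '{print $4}' | cut -d/ -f1
}
NVMF_FIRST_TARGET_IP=$(pick_rdma_ip mlx_0_0)       # 192.168.100.8 on this node
NVMF_SECOND_TARGET_IP=$(pick_rdma_ip mlx_0_1)      # 192.168.100.9 on this node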
00:19:13.660 [2024-11-20 16:09:44.439140] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:13.660 [2024-11-20 16:09:44.439235] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.660 [2024-11-20 16:09:44.439323] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:13.660 [2024-11-20 16:09:44.439325] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:13.919 16:09:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.919 16:09:44 -- common/autotest_common.sh@862 -- # return 0 00:19:13.919 16:09:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:13.919 16:09:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:13.919 16:09:44 -- common/autotest_common.sh@10 -- # set +x 00:19:13.919 16:09:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:13.919 16:09:44 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:19:13.919 16:09:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.919 16:09:44 -- common/autotest_common.sh@10 -- # set +x 00:19:13.919 16:09:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.919 16:09:44 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:19:13.919 16:09:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.919 16:09:44 -- common/autotest_common.sh@10 -- # set +x 00:19:13.919 16:09:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.919 16:09:44 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:13.919 16:09:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.919 16:09:44 -- common/autotest_common.sh@10 -- # set +x 00:19:13.919 [2024-11-20 16:09:44.611679] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1f22070/0x1f26540) succeed. 00:19:13.919 [2024-11-20 16:09:44.620551] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1f23610/0x1f67be0) succeed. 
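With the target started under --wait-for-rpc, the rpc_cmd calls above map one-to-one onto scripts/rpc.py invocations. A sketch of the equivalent manual bring-up, assuming the default /var/tmp/spdk.sock RPC socket:

RPC=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$RPC bdev_set_options -p 5 -c 1                        # tiny bdev_io pool, the point of the bdev_io_wait test
$RPC framework_start_init                              # release the --wait-for-rpc pause
$RPC nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
# the test then creates Malloc0, subsystem nqn.2016-06.io.spdk:cnode1 and an RDMA
# listener on 192.168.100.8:4420, as traced below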
00:19:14.179 16:09:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.179 16:09:44 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:14.179 16:09:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.179 16:09:44 -- common/autotest_common.sh@10 -- # set +x 00:19:14.179 Malloc0 00:19:14.179 16:09:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.179 16:09:44 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:14.179 16:09:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.179 16:09:44 -- common/autotest_common.sh@10 -- # set +x 00:19:14.179 16:09:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.179 16:09:44 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:14.179 16:09:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.179 16:09:44 -- common/autotest_common.sh@10 -- # set +x 00:19:14.179 16:09:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.179 16:09:44 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:14.179 16:09:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.179 16:09:44 -- common/autotest_common.sh@10 -- # set +x 00:19:14.179 [2024-11-20 16:09:44.799293] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:14.179 16:09:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.179 16:09:44 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1360381 00:19:14.179 16:09:44 -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:19:14.179 16:09:44 -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:19:14.179 16:09:44 -- target/bdev_io_wait.sh@30 -- # READ_PID=1360384 00:19:14.179 16:09:44 -- nvmf/common.sh@520 -- # config=() 00:19:14.179 16:09:44 -- nvmf/common.sh@520 -- # local subsystem config 00:19:14.179 16:09:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:14.179 16:09:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:14.179 { 00:19:14.179 "params": { 00:19:14.179 "name": "Nvme$subsystem", 00:19:14.179 "trtype": "$TEST_TRANSPORT", 00:19:14.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.179 "adrfam": "ipv4", 00:19:14.179 "trsvcid": "$NVMF_PORT", 00:19:14.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.179 "hdgst": ${hdgst:-false}, 00:19:14.179 "ddgst": ${ddgst:-false} 00:19:14.179 }, 00:19:14.179 "method": "bdev_nvme_attach_controller" 00:19:14.179 } 00:19:14.179 EOF 00:19:14.179 )") 00:19:14.179 16:09:44 -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:19:14.179 16:09:44 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:19:14.179 16:09:44 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1360387 00:19:14.179 16:09:44 -- nvmf/common.sh@520 -- # config=() 00:19:14.179 16:09:44 -- nvmf/common.sh@520 -- # local subsystem config 00:19:14.179 16:09:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:14.179 16:09:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:14.179 { 00:19:14.179 "params": { 00:19:14.179 "name": 
"Nvme$subsystem", 00:19:14.179 "trtype": "$TEST_TRANSPORT", 00:19:14.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.179 "adrfam": "ipv4", 00:19:14.179 "trsvcid": "$NVMF_PORT", 00:19:14.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.179 "hdgst": ${hdgst:-false}, 00:19:14.179 "ddgst": ${ddgst:-false} 00:19:14.179 }, 00:19:14.179 "method": "bdev_nvme_attach_controller" 00:19:14.179 } 00:19:14.179 EOF 00:19:14.179 )") 00:19:14.179 16:09:44 -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:19:14.179 16:09:44 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:19:14.179 16:09:44 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1360391 00:19:14.179 16:09:44 -- nvmf/common.sh@542 -- # cat 00:19:14.179 16:09:44 -- target/bdev_io_wait.sh@35 -- # sync 00:19:14.179 16:09:44 -- nvmf/common.sh@520 -- # config=() 00:19:14.179 16:09:44 -- nvmf/common.sh@520 -- # local subsystem config 00:19:14.179 16:09:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:14.179 16:09:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:14.179 { 00:19:14.179 "params": { 00:19:14.179 "name": "Nvme$subsystem", 00:19:14.179 "trtype": "$TEST_TRANSPORT", 00:19:14.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.179 "adrfam": "ipv4", 00:19:14.179 "trsvcid": "$NVMF_PORT", 00:19:14.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.179 "hdgst": ${hdgst:-false}, 00:19:14.179 "ddgst": ${ddgst:-false} 00:19:14.179 }, 00:19:14.179 "method": "bdev_nvme_attach_controller" 00:19:14.179 } 00:19:14.179 EOF 00:19:14.179 )") 00:19:14.179 16:09:44 -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:19:14.179 16:09:44 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:19:14.179 16:09:44 -- nvmf/common.sh@520 -- # config=() 00:19:14.179 16:09:44 -- nvmf/common.sh@542 -- # cat 00:19:14.179 16:09:44 -- nvmf/common.sh@520 -- # local subsystem config 00:19:14.179 16:09:44 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:19:14.179 16:09:44 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:19:14.179 { 00:19:14.179 "params": { 00:19:14.179 "name": "Nvme$subsystem", 00:19:14.179 "trtype": "$TEST_TRANSPORT", 00:19:14.179 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:14.179 "adrfam": "ipv4", 00:19:14.179 "trsvcid": "$NVMF_PORT", 00:19:14.179 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:14.179 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:14.179 "hdgst": ${hdgst:-false}, 00:19:14.179 "ddgst": ${ddgst:-false} 00:19:14.179 }, 00:19:14.179 "method": "bdev_nvme_attach_controller" 00:19:14.180 } 00:19:14.180 EOF 00:19:14.180 )") 00:19:14.180 16:09:44 -- nvmf/common.sh@542 -- # cat 00:19:14.180 16:09:44 -- target/bdev_io_wait.sh@37 -- # wait 1360381 00:19:14.180 16:09:44 -- nvmf/common.sh@542 -- # cat 00:19:14.180 16:09:44 -- nvmf/common.sh@544 -- # jq . 00:19:14.180 16:09:44 -- nvmf/common.sh@544 -- # jq . 00:19:14.180 16:09:44 -- nvmf/common.sh@544 -- # jq . 
00:19:14.180 16:09:44 -- nvmf/common.sh@545 -- # IFS=, 00:19:14.180 16:09:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:14.180 "params": { 00:19:14.180 "name": "Nvme1", 00:19:14.180 "trtype": "rdma", 00:19:14.180 "traddr": "192.168.100.8", 00:19:14.180 "adrfam": "ipv4", 00:19:14.180 "trsvcid": "4420", 00:19:14.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.180 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.180 "hdgst": false, 00:19:14.180 "ddgst": false 00:19:14.180 }, 00:19:14.180 "method": "bdev_nvme_attach_controller" 00:19:14.180 }' 00:19:14.180 16:09:44 -- nvmf/common.sh@544 -- # jq . 00:19:14.180 16:09:44 -- nvmf/common.sh@545 -- # IFS=, 00:19:14.180 16:09:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:14.180 "params": { 00:19:14.180 "name": "Nvme1", 00:19:14.180 "trtype": "rdma", 00:19:14.180 "traddr": "192.168.100.8", 00:19:14.180 "adrfam": "ipv4", 00:19:14.180 "trsvcid": "4420", 00:19:14.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.180 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.180 "hdgst": false, 00:19:14.180 "ddgst": false 00:19:14.180 }, 00:19:14.180 "method": "bdev_nvme_attach_controller" 00:19:14.180 }' 00:19:14.180 16:09:44 -- nvmf/common.sh@545 -- # IFS=, 00:19:14.180 16:09:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:14.180 "params": { 00:19:14.180 "name": "Nvme1", 00:19:14.180 "trtype": "rdma", 00:19:14.180 "traddr": "192.168.100.8", 00:19:14.180 "adrfam": "ipv4", 00:19:14.180 "trsvcid": "4420", 00:19:14.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.180 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.180 "hdgst": false, 00:19:14.180 "ddgst": false 00:19:14.180 }, 00:19:14.180 "method": "bdev_nvme_attach_controller" 00:19:14.180 }' 00:19:14.180 16:09:44 -- nvmf/common.sh@545 -- # IFS=, 00:19:14.180 16:09:44 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:19:14.180 "params": { 00:19:14.180 "name": "Nvme1", 00:19:14.180 "trtype": "rdma", 00:19:14.180 "traddr": "192.168.100.8", 00:19:14.180 "adrfam": "ipv4", 00:19:14.180 "trsvcid": "4420", 00:19:14.180 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.180 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.180 "hdgst": false, 00:19:14.180 "ddgst": false 00:19:14.180 }, 00:19:14.180 "method": "bdev_nvme_attach_controller" 00:19:14.180 }' 00:19:14.180 [2024-11-20 16:09:44.849056] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:14.180 [2024-11-20 16:09:44.849059] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:14.180 [2024-11-20 16:09:44.849106] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-20 16:09:44.849106] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:19:14.180 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:14.180 [2024-11-20 16:09:44.852742] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:14.180 [2024-11-20 16:09:44.852744] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
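The filled-in config blocks printed above are what each bdevperf instance consumes at startup. For comparison, the same controller attachment could be issued at runtime; a sketch with assumed rpc.py flag spellings, which is not the path this test takes:

# assumed flag names for bdev_nvme_attach_controller; the test feeds the JSON above to bdevperf instead
/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller \
    -b Nvme1 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1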
00:19:14.180 [2024-11-20 16:09:44.852800] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-20 16:09:44.852800] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:19:14.180 .cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:19:14.180 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.180 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.440 [2024-11-20 16:09:45.013474] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.440 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.440 [2024-11-20 16:09:45.036005] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:19:14.440 [2024-11-20 16:09:45.070247] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.440 [2024-11-20 16:09:45.091410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:19:14.440 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.440 [2024-11-20 16:09:45.164412] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.440 [2024-11-20 16:09:45.188176] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:19:14.699 [2024-11-20 16:09:45.271463] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.699 [2024-11-20 16:09:45.301528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:19:14.699 Running I/O for 1 seconds... 00:19:14.699 Running I/O for 1 seconds... 00:19:14.699 Running I/O for 1 seconds... 00:19:14.699 Running I/O for 1 seconds... 
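All four jobs above follow the same launch pattern; a sketch for the write job, assuming gen_nvmf_target_json is the helper sourced from the nvmf test common.sh. The /dev/fd/63 path seen in the trace is simply the descriptor bash assigns to the process substitution:

BDEVPERF=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf
$BDEVPERF -m 0x10 -i 1 --json <(gen_nvmf_target_json) \
    -q 128 -o 4096 -w write -t 1 -s 256 &
WRITE_PID=$!                                  # the read/flush/unmap jobs differ only in -m, -i and -w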
00:19:15.639 00:19:15.639 Latency(us) 00:19:15.639 [2024-11-20T15:09:46.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.639 [2024-11-20T15:09:46.444Z] Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:19:15.639 Nvme1n1 : 1.01 17904.04 69.94 0.00 0.00 7127.41 3722.44 14575.21 00:19:15.639 [2024-11-20T15:09:46.444Z] =================================================================================================================== 00:19:15.639 [2024-11-20T15:09:46.444Z] Total : 17904.04 69.94 0.00 0.00 7127.41 3722.44 14575.21 00:19:15.639 00:19:15.639 Latency(us) 00:19:15.639 [2024-11-20T15:09:46.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.639 [2024-11-20T15:09:46.444Z] Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:19:15.639 Nvme1n1 : 1.01 14714.09 57.48 0.00 0.00 8673.62 5006.95 18140.36 00:19:15.639 [2024-11-20T15:09:46.444Z] =================================================================================================================== 00:19:15.639 [2024-11-20T15:09:46.444Z] Total : 14714.09 57.48 0.00 0.00 8673.62 5006.95 18140.36 00:19:15.639 00:19:15.639 Latency(us) 00:19:15.639 [2024-11-20T15:09:46.444Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.639 [2024-11-20T15:09:46.444Z] Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:19:15.639 Nvme1n1 : 1.00 264685.83 1033.93 0.00 0.00 482.32 193.33 1795.69 00:19:15.639 [2024-11-20T15:09:46.444Z] =================================================================================================================== 00:19:15.639 [2024-11-20T15:09:46.445Z] Total : 264685.83 1033.93 0.00 0.00 482.32 193.33 1795.69 00:19:15.899 00:19:15.899 Latency(us) 00:19:15.899 [2024-11-20T15:09:46.704Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:15.899 [2024-11-20T15:09:46.704Z] Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:19:15.899 Nvme1n1 : 1.00 17095.63 66.78 0.00 0.00 7470.91 3460.30 17301.50 00:19:15.899 [2024-11-20T15:09:46.704Z] =================================================================================================================== 00:19:15.899 [2024-11-20T15:09:46.704Z] Total : 17095.63 66.78 0.00 0.00 7470.91 3460.30 17301.50 00:19:15.899 16:09:46 -- target/bdev_io_wait.sh@38 -- # wait 1360384 00:19:16.159 16:09:46 -- target/bdev_io_wait.sh@39 -- # wait 1360387 00:19:16.159 16:09:46 -- target/bdev_io_wait.sh@40 -- # wait 1360391 00:19:16.159 16:09:46 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:16.159 16:09:46 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.159 16:09:46 -- common/autotest_common.sh@10 -- # set +x 00:19:16.159 16:09:46 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.159 16:09:46 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:19:16.159 16:09:46 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:19:16.159 16:09:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:16.159 16:09:46 -- nvmf/common.sh@116 -- # sync 00:19:16.159 16:09:46 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:16.159 16:09:46 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:16.159 16:09:46 -- nvmf/common.sh@119 -- # set +e 00:19:16.159 16:09:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:16.159 16:09:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:16.159 rmmod nvme_rdma 
00:19:16.159 rmmod nvme_fabrics 00:19:16.159 16:09:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:16.159 16:09:46 -- nvmf/common.sh@123 -- # set -e 00:19:16.159 16:09:46 -- nvmf/common.sh@124 -- # return 0 00:19:16.159 16:09:46 -- nvmf/common.sh@477 -- # '[' -n 1360258 ']' 00:19:16.159 16:09:46 -- nvmf/common.sh@478 -- # killprocess 1360258 00:19:16.159 16:09:46 -- common/autotest_common.sh@936 -- # '[' -z 1360258 ']' 00:19:16.159 16:09:46 -- common/autotest_common.sh@940 -- # kill -0 1360258 00:19:16.159 16:09:46 -- common/autotest_common.sh@941 -- # uname 00:19:16.159 16:09:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:16.159 16:09:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1360258 00:19:16.418 16:09:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:16.418 16:09:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:16.418 16:09:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1360258' 00:19:16.418 killing process with pid 1360258 00:19:16.418 16:09:46 -- common/autotest_common.sh@955 -- # kill 1360258 00:19:16.418 16:09:46 -- common/autotest_common.sh@960 -- # wait 1360258 00:19:16.677 16:09:47 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:16.678 16:09:47 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:16.678 00:19:16.678 real 0m9.770s 00:19:16.678 user 0m18.238s 00:19:16.678 sys 0m6.449s 00:19:16.678 16:09:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:16.678 16:09:47 -- common/autotest_common.sh@10 -- # set +x 00:19:16.678 ************************************ 00:19:16.678 END TEST nvmf_bdev_io_wait 00:19:16.678 ************************************ 00:19:16.678 16:09:47 -- nvmf/nvmf.sh@50 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:19:16.678 16:09:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:16.678 16:09:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:16.678 16:09:47 -- common/autotest_common.sh@10 -- # set +x 00:19:16.678 ************************************ 00:19:16.678 START TEST nvmf_queue_depth 00:19:16.678 ************************************ 00:19:16.678 16:09:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=rdma 00:19:16.678 * Looking for test storage... 
00:19:16.678 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:16.678 16:09:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:16.678 16:09:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:16.678 16:09:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:16.678 16:09:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:16.678 16:09:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:16.678 16:09:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:16.678 16:09:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:16.678 16:09:47 -- scripts/common.sh@335 -- # IFS=.-: 00:19:16.678 16:09:47 -- scripts/common.sh@335 -- # read -ra ver1 00:19:16.678 16:09:47 -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.678 16:09:47 -- scripts/common.sh@336 -- # read -ra ver2 00:19:16.678 16:09:47 -- scripts/common.sh@337 -- # local 'op=<' 00:19:16.678 16:09:47 -- scripts/common.sh@339 -- # ver1_l=2 00:19:16.678 16:09:47 -- scripts/common.sh@340 -- # ver2_l=1 00:19:16.678 16:09:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:16.678 16:09:47 -- scripts/common.sh@343 -- # case "$op" in 00:19:16.678 16:09:47 -- scripts/common.sh@344 -- # : 1 00:19:16.678 16:09:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:16.678 16:09:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:16.678 16:09:47 -- scripts/common.sh@364 -- # decimal 1 00:19:16.678 16:09:47 -- scripts/common.sh@352 -- # local d=1 00:19:16.678 16:09:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.678 16:09:47 -- scripts/common.sh@354 -- # echo 1 00:19:16.678 16:09:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:16.678 16:09:47 -- scripts/common.sh@365 -- # decimal 2 00:19:16.678 16:09:47 -- scripts/common.sh@352 -- # local d=2 00:19:16.678 16:09:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.678 16:09:47 -- scripts/common.sh@354 -- # echo 2 00:19:16.678 16:09:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:16.678 16:09:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:16.678 16:09:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:16.678 16:09:47 -- scripts/common.sh@367 -- # return 0 00:19:16.678 16:09:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.678 16:09:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:16.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.678 --rc genhtml_branch_coverage=1 00:19:16.678 --rc genhtml_function_coverage=1 00:19:16.678 --rc genhtml_legend=1 00:19:16.678 --rc geninfo_all_blocks=1 00:19:16.678 --rc geninfo_unexecuted_blocks=1 00:19:16.678 00:19:16.678 ' 00:19:16.678 16:09:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:16.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.678 --rc genhtml_branch_coverage=1 00:19:16.678 --rc genhtml_function_coverage=1 00:19:16.678 --rc genhtml_legend=1 00:19:16.678 --rc geninfo_all_blocks=1 00:19:16.678 --rc geninfo_unexecuted_blocks=1 00:19:16.678 00:19:16.678 ' 00:19:16.678 16:09:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:16.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.678 --rc genhtml_branch_coverage=1 00:19:16.678 --rc genhtml_function_coverage=1 00:19:16.678 --rc genhtml_legend=1 00:19:16.678 --rc geninfo_all_blocks=1 00:19:16.678 --rc geninfo_unexecuted_blocks=1 00:19:16.678 00:19:16.678 ' 
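The scripts/common.sh lines above are the lcov version gate (lt 1.15 2 via cmp_versions). A condensed sketch of the same dotted-version comparison; ver_lt is an illustrative name, not the function the scripts define:

ver_lt() {                                    # succeeds when $1 < $2, e.g. ver_lt 1.15 2
    local IFS=.- i
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        (( ${v1[i]:-0} < ${v2[i]:-0} )) && return 0
        (( ${v1[i]:-0} > ${v2[i]:-0} )) && return 1
    done
    return 1                                  # equal versions are not "less than"
}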
00:19:16.678 16:09:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:16.678 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.678 --rc genhtml_branch_coverage=1 00:19:16.678 --rc genhtml_function_coverage=1 00:19:16.678 --rc genhtml_legend=1 00:19:16.678 --rc geninfo_all_blocks=1 00:19:16.678 --rc geninfo_unexecuted_blocks=1 00:19:16.678 00:19:16.678 ' 00:19:16.678 16:09:47 -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.678 16:09:47 -- nvmf/common.sh@7 -- # uname -s 00:19:16.678 16:09:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:16.678 16:09:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.678 16:09:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.678 16:09:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.678 16:09:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:16.678 16:09:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:16.678 16:09:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.678 16:09:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:16.678 16:09:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.678 16:09:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:16.938 16:09:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:16.938 16:09:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:16.938 16:09:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.938 16:09:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:16.938 16:09:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:16.938 16:09:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:16.938 16:09:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.938 16:09:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.938 16:09:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.938 16:09:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.938 16:09:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.938 16:09:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.938 16:09:47 -- paths/export.sh@5 -- # export PATH 00:19:16.939 16:09:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.939 16:09:47 -- nvmf/common.sh@46 -- # : 0 00:19:16.939 16:09:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:16.939 16:09:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:16.939 16:09:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:16.939 16:09:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.939 16:09:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.939 16:09:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:16.939 16:09:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:16.939 16:09:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:16.939 16:09:47 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:19:16.939 16:09:47 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:19:16.939 16:09:47 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:16.939 16:09:47 -- target/queue_depth.sh@19 -- # nvmftestinit 00:19:16.939 16:09:47 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:16.939 16:09:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.939 16:09:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:16.939 16:09:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:16.939 16:09:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:16.939 16:09:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:16.939 16:09:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:16.939 16:09:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:16.939 16:09:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:16.939 16:09:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:16.939 16:09:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:16.939 16:09:47 -- common/autotest_common.sh@10 -- # set +x 00:19:23.513 16:09:53 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:23.513 16:09:53 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:23.513 16:09:53 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:23.513 16:09:53 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:23.513 16:09:53 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:23.513 16:09:53 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:23.513 16:09:53 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:23.513 16:09:53 -- nvmf/common.sh@294 -- # net_devs=() 
00:19:23.513 16:09:53 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:23.513 16:09:53 -- nvmf/common.sh@295 -- # e810=() 00:19:23.513 16:09:53 -- nvmf/common.sh@295 -- # local -ga e810 00:19:23.513 16:09:53 -- nvmf/common.sh@296 -- # x722=() 00:19:23.513 16:09:53 -- nvmf/common.sh@296 -- # local -ga x722 00:19:23.513 16:09:53 -- nvmf/common.sh@297 -- # mlx=() 00:19:23.513 16:09:53 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:23.513 16:09:53 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.513 16:09:53 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.513 16:09:53 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.513 16:09:53 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.513 16:09:53 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.513 16:09:53 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.513 16:09:53 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.513 16:09:53 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.513 16:09:53 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.513 16:09:53 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.513 16:09:53 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.513 16:09:53 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:23.513 16:09:53 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:23.513 16:09:53 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:23.513 16:09:53 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:23.513 16:09:53 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:23.513 16:09:53 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:23.513 16:09:53 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:23.513 16:09:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:23.513 16:09:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:23.513 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:23.513 16:09:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:23.513 16:09:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:23.513 16:09:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:23.513 16:09:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:23.513 16:09:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:23.513 16:09:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:23.513 16:09:53 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:23.513 16:09:53 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:23.513 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:23.513 16:09:53 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:23.513 16:09:53 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:23.513 16:09:53 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:23.513 16:09:53 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:23.513 16:09:53 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:23.514 16:09:53 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:23.514 16:09:53 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:23.514 16:09:53 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:23.514 16:09:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:23.514 16:09:53 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.514 16:09:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:23.514 16:09:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.514 16:09:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:23.514 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:23.514 16:09:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.514 16:09:53 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:23.514 16:09:53 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.514 16:09:53 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:23.514 16:09:53 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.514 16:09:53 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:23.514 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:23.514 16:09:53 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.514 16:09:53 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:23.514 16:09:53 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:23.514 16:09:53 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:23.514 16:09:53 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:23.514 16:09:53 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:23.514 16:09:53 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:23.514 16:09:53 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:23.514 16:09:53 -- nvmf/common.sh@57 -- # uname 00:19:23.514 16:09:53 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:23.514 16:09:53 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:23.514 16:09:53 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:23.514 16:09:53 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:23.514 16:09:53 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:23.514 16:09:53 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:23.514 16:09:53 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:23.514 16:09:53 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:23.514 16:09:53 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:23.514 16:09:53 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:23.514 16:09:53 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:23.514 16:09:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:23.514 16:09:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:23.514 16:09:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:23.514 16:09:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:23.514 16:09:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:23.514 16:09:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:23.514 16:09:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.514 16:09:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:23.514 16:09:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:23.514 16:09:53 -- nvmf/common.sh@104 -- # continue 2 00:19:23.514 16:09:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:23.514 16:09:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.514 16:09:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:23.514 16:09:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.514 16:09:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:23.514 16:09:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:23.514 16:09:53 -- 
nvmf/common.sh@104 -- # continue 2 00:19:23.514 16:09:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:23.514 16:09:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:23.514 16:09:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:23.514 16:09:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:23.514 16:09:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:23.514 16:09:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:23.514 16:09:53 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:23.514 16:09:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:23.514 16:09:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:23.514 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:23.514 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:23.514 altname enp217s0f0np0 00:19:23.514 altname ens818f0np0 00:19:23.514 inet 192.168.100.8/24 scope global mlx_0_0 00:19:23.514 valid_lft forever preferred_lft forever 00:19:23.514 16:09:53 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:23.514 16:09:53 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:23.514 16:09:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:23.514 16:09:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:23.514 16:09:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:23.514 16:09:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:23.514 16:09:53 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:23.514 16:09:53 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:23.514 16:09:53 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:23.514 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:23.514 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:23.514 altname enp217s0f1np1 00:19:23.514 altname ens818f1np1 00:19:23.514 inet 192.168.100.9/24 scope global mlx_0_1 00:19:23.514 valid_lft forever preferred_lft forever 00:19:23.514 16:09:53 -- nvmf/common.sh@410 -- # return 0 00:19:23.514 16:09:53 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:23.514 16:09:53 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:23.514 16:09:53 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:23.514 16:09:53 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:23.514 16:09:53 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:23.514 16:09:53 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:23.514 16:09:53 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:23.514 16:09:53 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:23.514 16:09:53 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:23.514 16:09:53 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:23.514 16:09:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:23.514 16:09:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.514 16:09:53 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:23.514 16:09:53 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:23.514 16:09:53 -- nvmf/common.sh@104 -- # continue 2 00:19:23.514 16:09:53 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:23.514 16:09:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.514 16:09:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:23.514 16:09:53 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:23.514 16:09:53 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
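The allocate_nic_ips pass traced above pulls each RDMA interface's IPv4 address out of ip -o -4 addr show. A minimal standalone sketch of that lookup, assuming it mirrors the get_ip_address helper being traced (interface names and addresses are the ones this rig reports):

get_ip_address() {
    local interface=$1
    # field 4 of the one-line output is "ADDR/PREFIX"; drop the prefix length
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}
get_ip_address mlx_0_0   # 192.168.100.8 on this test bed
get_ip_address mlx_0_1   # 192.168.100.9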
00:19:23.514 16:09:53 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:23.514 16:09:53 -- nvmf/common.sh@104 -- # continue 2 00:19:23.514 16:09:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:23.514 16:09:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:23.514 16:09:53 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:23.514 16:09:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:23.514 16:09:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:23.514 16:09:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:23.514 16:09:53 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:23.514 16:09:53 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:23.514 16:09:53 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:23.514 16:09:53 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:23.514 16:09:53 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:23.514 16:09:53 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:23.514 16:09:53 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:23.514 192.168.100.9' 00:19:23.514 16:09:53 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:23.514 192.168.100.9' 00:19:23.514 16:09:53 -- nvmf/common.sh@445 -- # head -n 1 00:19:23.514 16:09:53 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:23.514 16:09:53 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:23.514 192.168.100.9' 00:19:23.514 16:09:53 -- nvmf/common.sh@446 -- # tail -n +2 00:19:23.514 16:09:53 -- nvmf/common.sh@446 -- # head -n 1 00:19:23.514 16:09:53 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:23.514 16:09:53 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:23.514 16:09:53 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:23.514 16:09:53 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:23.514 16:09:53 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:23.514 16:09:53 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:23.514 16:09:53 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:19:23.514 16:09:53 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:23.514 16:09:53 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:23.514 16:09:53 -- common/autotest_common.sh@10 -- # set +x 00:19:23.514 16:09:53 -- nvmf/common.sh@469 -- # nvmfpid=1364035 00:19:23.514 16:09:53 -- nvmf/common.sh@470 -- # waitforlisten 1364035 00:19:23.514 16:09:53 -- common/autotest_common.sh@829 -- # '[' -z 1364035 ']' 00:19:23.514 16:09:53 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.514 16:09:53 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:23.514 16:09:53 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.514 16:09:53 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:23.514 16:09:53 -- common/autotest_common.sh@10 -- # set +x 00:19:23.514 16:09:53 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:23.514 [2024-11-20 16:09:53.679365] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:19:23.514 [2024-11-20 16:09:53.679414] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.514 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.514 [2024-11-20 16:09:53.750066] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:23.514 [2024-11-20 16:09:53.786421] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:23.514 [2024-11-20 16:09:53.786533] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.515 [2024-11-20 16:09:53.786544] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.515 [2024-11-20 16:09:53.786552] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:23.515 [2024-11-20 16:09:53.786580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.774 16:09:54 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:23.774 16:09:54 -- common/autotest_common.sh@862 -- # return 0 00:19:23.774 16:09:54 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:23.774 16:09:54 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:23.774 16:09:54 -- common/autotest_common.sh@10 -- # set +x 00:19:23.774 16:09:54 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.774 16:09:54 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:23.774 16:09:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.774 16:09:54 -- common/autotest_common.sh@10 -- # set +x 00:19:23.774 [2024-11-20 16:09:54.548437] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xc37550/0xc3ba00) succeed. 00:19:23.774 [2024-11-20 16:09:54.557281] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xc38a00/0xc7d0a0) succeed. 
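nvmfappstart above launches the nvmf target pinned to core 1 and the script blocks until the RPC socket answers. A rough by-hand equivalent, sketched under the assumption of an SPDK build tree and the default /var/tmp/spdk.sock socket (the waitforlisten helper does more bookkeeping than this loop):

# start the target: shared-memory id 0, all tracepoint groups, core mask 0x2 (core 1)
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# poll the RPC socket until the app accepts requests
until ./scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done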
00:19:24.034 16:09:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.034 16:09:54 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:24.034 16:09:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.034 16:09:54 -- common/autotest_common.sh@10 -- # set +x 00:19:24.034 Malloc0 00:19:24.034 16:09:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.034 16:09:54 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:24.034 16:09:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.034 16:09:54 -- common/autotest_common.sh@10 -- # set +x 00:19:24.034 16:09:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.034 16:09:54 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:24.034 16:09:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.034 16:09:54 -- common/autotest_common.sh@10 -- # set +x 00:19:24.034 16:09:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.034 16:09:54 -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:24.034 16:09:54 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.034 16:09:54 -- common/autotest_common.sh@10 -- # set +x 00:19:24.034 [2024-11-20 16:09:54.646494] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:24.034 16:09:54 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.034 16:09:54 -- target/queue_depth.sh@30 -- # bdevperf_pid=1364315 00:19:24.034 16:09:54 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:24.034 16:09:54 -- target/queue_depth.sh@33 -- # waitforlisten 1364315 /var/tmp/bdevperf.sock 00:19:24.034 16:09:54 -- common/autotest_common.sh@829 -- # '[' -z 1364315 ']' 00:19:24.034 16:09:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:24.034 16:09:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:24.034 16:09:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:24.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:24.034 16:09:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:24.034 16:09:54 -- common/autotest_common.sh@10 -- # set +x 00:19:24.034 16:09:54 -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:19:24.034 [2024-11-20 16:09:54.693523] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
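The rpc_cmd calls traced above are the whole target-side setup for the queue_depth run. Issued directly with rpc.py against the default socket, the same sequence would look roughly like this (a sketch; values are copied from the trace, and the bdevperf command line is the one started just above):

# transport, backing bdev (64 MiB, 512-byte blocks), subsystem, namespace, RDMA listener
scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# initiator side: bdevperf in wait-for-RPC mode, queue depth 1024, 4 KiB verify I/O for 10 s
build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10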
00:19:24.035 [2024-11-20 16:09:54.693572] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1364315 ] 00:19:24.035 EAL: No free 2048 kB hugepages reported on node 1 00:19:24.035 [2024-11-20 16:09:54.762959] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.035 [2024-11-20 16:09:54.798767] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.971 16:09:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:24.971 16:09:55 -- common/autotest_common.sh@862 -- # return 0 00:19:24.971 16:09:55 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:19:24.971 16:09:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.971 16:09:55 -- common/autotest_common.sh@10 -- # set +x 00:19:24.971 NVMe0n1 00:19:24.971 16:09:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.971 16:09:55 -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:24.971 Running I/O for 10 seconds... 00:19:34.956 00:19:34.956 Latency(us) 00:19:34.956 [2024-11-20T15:10:05.761Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.956 [2024-11-20T15:10:05.761Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:19:34.956 Verification LBA range: start 0x0 length 0x4000 00:19:34.956 NVMe0n1 : 10.03 29476.70 115.14 0.00 0.00 34661.33 7916.75 35022.44 00:19:34.956 [2024-11-20T15:10:05.761Z] =================================================================================================================== 00:19:34.956 [2024-11-20T15:10:05.761Z] Total : 29476.70 115.14 0.00 0.00 34661.33 7916.75 35022.44 00:19:34.956 0 00:19:34.956 16:10:05 -- target/queue_depth.sh@39 -- # killprocess 1364315 00:19:34.956 16:10:05 -- common/autotest_common.sh@936 -- # '[' -z 1364315 ']' 00:19:34.956 16:10:05 -- common/autotest_common.sh@940 -- # kill -0 1364315 00:19:34.956 16:10:05 -- common/autotest_common.sh@941 -- # uname 00:19:34.956 16:10:05 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:34.956 16:10:05 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1364315 00:19:35.215 16:10:05 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:35.215 16:10:05 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:35.215 16:10:05 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1364315' 00:19:35.215 killing process with pid 1364315 00:19:35.215 16:10:05 -- common/autotest_common.sh@955 -- # kill 1364315 00:19:35.215 Received shutdown signal, test time was about 10.000000 seconds 00:19:35.215 00:19:35.215 Latency(us) 00:19:35.215 [2024-11-20T15:10:06.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.215 [2024-11-20T15:10:06.020Z] =================================================================================================================== 00:19:35.215 [2024-11-20T15:10:06.020Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:35.215 16:10:05 -- common/autotest_common.sh@960 -- # wait 1364315 00:19:35.215 16:10:05 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:19:35.215 16:10:05 -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:19:35.215 16:10:05 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:35.215 16:10:05 -- nvmf/common.sh@116 -- # sync 00:19:35.215 16:10:05 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:35.215 16:10:06 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:35.215 16:10:06 -- nvmf/common.sh@119 -- # set +e 00:19:35.215 16:10:06 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:35.215 16:10:06 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:35.215 rmmod nvme_rdma 00:19:35.474 rmmod nvme_fabrics 00:19:35.474 16:10:06 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:35.474 16:10:06 -- nvmf/common.sh@123 -- # set -e 00:19:35.474 16:10:06 -- nvmf/common.sh@124 -- # return 0 00:19:35.474 16:10:06 -- nvmf/common.sh@477 -- # '[' -n 1364035 ']' 00:19:35.474 16:10:06 -- nvmf/common.sh@478 -- # killprocess 1364035 00:19:35.474 16:10:06 -- common/autotest_common.sh@936 -- # '[' -z 1364035 ']' 00:19:35.474 16:10:06 -- common/autotest_common.sh@940 -- # kill -0 1364035 00:19:35.474 16:10:06 -- common/autotest_common.sh@941 -- # uname 00:19:35.474 16:10:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:35.474 16:10:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1364035 00:19:35.474 16:10:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:35.474 16:10:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:35.475 16:10:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1364035' 00:19:35.475 killing process with pid 1364035 00:19:35.475 16:10:06 -- common/autotest_common.sh@955 -- # kill 1364035 00:19:35.475 16:10:06 -- common/autotest_common.sh@960 -- # wait 1364035 00:19:35.734 16:10:06 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:35.734 16:10:06 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:35.734 00:19:35.734 real 0m19.056s 00:19:35.734 user 0m25.958s 00:19:35.734 sys 0m5.381s 00:19:35.734 16:10:06 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:35.734 16:10:06 -- common/autotest_common.sh@10 -- # set +x 00:19:35.734 ************************************ 00:19:35.734 END TEST nvmf_queue_depth 00:19:35.734 ************************************ 00:19:35.734 16:10:06 -- nvmf/nvmf.sh@51 -- # run_test nvmf_multipath /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:35.734 16:10:06 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:35.734 16:10:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:35.734 16:10:06 -- common/autotest_common.sh@10 -- # set +x 00:19:35.734 ************************************ 00:19:35.734 START TEST nvmf_multipath 00:19:35.734 ************************************ 00:19:35.734 16:10:06 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=rdma 00:19:35.734 * Looking for test storage... 
00:19:35.734 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:35.734 16:10:06 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:35.734 16:10:06 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:35.734 16:10:06 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:35.734 16:10:06 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:35.734 16:10:06 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:35.734 16:10:06 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:35.734 16:10:06 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:35.734 16:10:06 -- scripts/common.sh@335 -- # IFS=.-: 00:19:35.734 16:10:06 -- scripts/common.sh@335 -- # read -ra ver1 00:19:35.734 16:10:06 -- scripts/common.sh@336 -- # IFS=.-: 00:19:35.734 16:10:06 -- scripts/common.sh@336 -- # read -ra ver2 00:19:35.734 16:10:06 -- scripts/common.sh@337 -- # local 'op=<' 00:19:35.734 16:10:06 -- scripts/common.sh@339 -- # ver1_l=2 00:19:35.734 16:10:06 -- scripts/common.sh@340 -- # ver2_l=1 00:19:35.734 16:10:06 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:35.734 16:10:06 -- scripts/common.sh@343 -- # case "$op" in 00:19:35.734 16:10:06 -- scripts/common.sh@344 -- # : 1 00:19:35.734 16:10:06 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:35.734 16:10:06 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:35.734 16:10:06 -- scripts/common.sh@364 -- # decimal 1 00:19:35.734 16:10:06 -- scripts/common.sh@352 -- # local d=1 00:19:35.734 16:10:06 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:35.734 16:10:06 -- scripts/common.sh@354 -- # echo 1 00:19:35.734 16:10:06 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:35.734 16:10:06 -- scripts/common.sh@365 -- # decimal 2 00:19:35.994 16:10:06 -- scripts/common.sh@352 -- # local d=2 00:19:35.994 16:10:06 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:35.994 16:10:06 -- scripts/common.sh@354 -- # echo 2 00:19:35.994 16:10:06 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:35.994 16:10:06 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:35.994 16:10:06 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:35.994 16:10:06 -- scripts/common.sh@367 -- # return 0 00:19:35.994 16:10:06 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:35.994 16:10:06 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:35.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.994 --rc genhtml_branch_coverage=1 00:19:35.994 --rc genhtml_function_coverage=1 00:19:35.994 --rc genhtml_legend=1 00:19:35.994 --rc geninfo_all_blocks=1 00:19:35.994 --rc geninfo_unexecuted_blocks=1 00:19:35.994 00:19:35.994 ' 00:19:35.994 16:10:06 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:35.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.994 --rc genhtml_branch_coverage=1 00:19:35.994 --rc genhtml_function_coverage=1 00:19:35.994 --rc genhtml_legend=1 00:19:35.994 --rc geninfo_all_blocks=1 00:19:35.994 --rc geninfo_unexecuted_blocks=1 00:19:35.994 00:19:35.994 ' 00:19:35.994 16:10:06 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:35.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.994 --rc genhtml_branch_coverage=1 00:19:35.994 --rc genhtml_function_coverage=1 00:19:35.994 --rc genhtml_legend=1 00:19:35.994 --rc geninfo_all_blocks=1 00:19:35.994 --rc geninfo_unexecuted_blocks=1 00:19:35.994 00:19:35.994 ' 
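The lcov probe above compares the installed version against 1.15 field by field before picking coverage flags. A self-contained sketch of that comparison, assumed equivalent to the lt/cmp_versions helpers being traced (numeric version fields only):

lt() {   # succeeds when $1 is strictly lower than $2
    local -a ver1 ver2
    local v n
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    n=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < n; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1
}
lt 1.15 2 && echo 'old lcov: enable branch/function coverage flags'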
00:19:35.994 16:10:06 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:35.994 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:35.994 --rc genhtml_branch_coverage=1 00:19:35.994 --rc genhtml_function_coverage=1 00:19:35.994 --rc genhtml_legend=1 00:19:35.994 --rc geninfo_all_blocks=1 00:19:35.994 --rc geninfo_unexecuted_blocks=1 00:19:35.994 00:19:35.994 ' 00:19:35.994 16:10:06 -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:35.994 16:10:06 -- nvmf/common.sh@7 -- # uname -s 00:19:35.994 16:10:06 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:35.994 16:10:06 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:35.994 16:10:06 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:35.995 16:10:06 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:35.995 16:10:06 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:35.995 16:10:06 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:35.995 16:10:06 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:35.995 16:10:06 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:35.995 16:10:06 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:35.995 16:10:06 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:35.995 16:10:06 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:35.995 16:10:06 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:35.995 16:10:06 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:35.995 16:10:06 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:35.995 16:10:06 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:35.995 16:10:06 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:35.995 16:10:06 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:35.995 16:10:06 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:35.995 16:10:06 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:35.995 16:10:06 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.995 16:10:06 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.995 16:10:06 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.995 16:10:06 -- paths/export.sh@5 -- # export PATH 00:19:35.995 16:10:06 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:35.995 16:10:06 -- nvmf/common.sh@46 -- # : 0 00:19:35.995 16:10:06 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:35.995 16:10:06 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:35.995 16:10:06 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:35.995 16:10:06 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:35.995 16:10:06 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:35.995 16:10:06 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:35.995 16:10:06 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:35.995 16:10:06 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:35.995 16:10:06 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:35.995 16:10:06 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:35.995 16:10:06 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:19:35.995 16:10:06 -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:19:35.995 16:10:06 -- target/multipath.sh@43 -- # nvmftestinit 00:19:35.995 16:10:06 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:35.995 16:10:06 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:35.995 16:10:06 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:35.995 16:10:06 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:35.995 16:10:06 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:35.995 16:10:06 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:35.995 16:10:06 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:35.995 16:10:06 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:35.995 16:10:06 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:35.995 16:10:06 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:35.995 16:10:06 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:35.995 16:10:06 -- common/autotest_common.sh@10 -- # set +x 00:19:42.660 16:10:12 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:42.660 16:10:12 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:42.660 16:10:12 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:42.660 16:10:12 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:42.660 16:10:12 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:42.660 16:10:12 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:42.660 16:10:12 -- 
nvmf/common.sh@292 -- # local -A pci_drivers 00:19:42.660 16:10:12 -- nvmf/common.sh@294 -- # net_devs=() 00:19:42.660 16:10:12 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:42.660 16:10:12 -- nvmf/common.sh@295 -- # e810=() 00:19:42.660 16:10:12 -- nvmf/common.sh@295 -- # local -ga e810 00:19:42.660 16:10:12 -- nvmf/common.sh@296 -- # x722=() 00:19:42.660 16:10:12 -- nvmf/common.sh@296 -- # local -ga x722 00:19:42.660 16:10:12 -- nvmf/common.sh@297 -- # mlx=() 00:19:42.660 16:10:12 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:42.660 16:10:12 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:42.660 16:10:12 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:42.660 16:10:12 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:42.660 16:10:12 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:42.660 16:10:12 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:42.660 16:10:12 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:42.660 16:10:12 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:42.660 16:10:12 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:42.660 16:10:12 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:42.660 16:10:12 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:42.660 16:10:12 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:42.660 16:10:12 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:42.660 16:10:12 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:42.660 16:10:12 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:42.660 16:10:12 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:42.660 16:10:12 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:42.660 16:10:12 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:42.660 16:10:12 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:42.660 16:10:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:42.660 16:10:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:42.660 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:42.660 16:10:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:42.660 16:10:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:42.660 16:10:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:42.660 16:10:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:42.660 16:10:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:42.660 16:10:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:42.660 16:10:12 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:42.660 16:10:12 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:42.660 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:42.660 16:10:12 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:42.660 16:10:12 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:42.660 16:10:12 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:42.660 16:10:12 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:42.660 16:10:12 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:42.660 16:10:12 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:42.660 16:10:12 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:42.660 16:10:12 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:42.660 16:10:12 -- 
nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:42.660 16:10:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.660 16:10:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:42.660 16:10:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.660 16:10:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:42.660 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:42.660 16:10:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.660 16:10:12 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:42.660 16:10:12 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:42.660 16:10:12 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:42.660 16:10:12 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:42.660 16:10:12 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:42.660 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:42.660 16:10:12 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:42.660 16:10:12 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:42.660 16:10:12 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:42.660 16:10:12 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:42.660 16:10:12 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:42.660 16:10:12 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:42.660 16:10:12 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:42.660 16:10:12 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:42.660 16:10:12 -- nvmf/common.sh@57 -- # uname 00:19:42.660 16:10:12 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:42.660 16:10:12 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:42.660 16:10:12 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:42.660 16:10:12 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:42.660 16:10:12 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:42.660 16:10:13 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:42.660 16:10:13 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:42.660 16:10:13 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:42.660 16:10:13 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:42.660 16:10:13 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:42.660 16:10:13 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:42.660 16:10:13 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:42.660 16:10:13 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:42.660 16:10:13 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:42.660 16:10:13 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:42.660 16:10:13 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:42.660 16:10:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:42.660 16:10:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:42.661 16:10:13 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:42.661 16:10:13 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:42.661 16:10:13 -- nvmf/common.sh@104 -- # continue 2 00:19:42.661 16:10:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:42.661 16:10:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:42.661 16:10:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:42.661 16:10:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:42.661 16:10:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == 
\m\l\x\_\0\_\1 ]] 00:19:42.661 16:10:13 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:42.661 16:10:13 -- nvmf/common.sh@104 -- # continue 2 00:19:42.661 16:10:13 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:42.661 16:10:13 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:42.661 16:10:13 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:42.661 16:10:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:42.661 16:10:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:42.661 16:10:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:42.661 16:10:13 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:42.661 16:10:13 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:42.661 16:10:13 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:42.661 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:42.661 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:42.661 altname enp217s0f0np0 00:19:42.661 altname ens818f0np0 00:19:42.661 inet 192.168.100.8/24 scope global mlx_0_0 00:19:42.661 valid_lft forever preferred_lft forever 00:19:42.661 16:10:13 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:42.661 16:10:13 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:42.661 16:10:13 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:42.661 16:10:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:42.661 16:10:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:42.661 16:10:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:42.661 16:10:13 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:42.661 16:10:13 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:42.661 16:10:13 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:42.661 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:42.661 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:42.661 altname enp217s0f1np1 00:19:42.661 altname ens818f1np1 00:19:42.661 inet 192.168.100.9/24 scope global mlx_0_1 00:19:42.661 valid_lft forever preferred_lft forever 00:19:42.661 16:10:13 -- nvmf/common.sh@410 -- # return 0 00:19:42.661 16:10:13 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:42.661 16:10:13 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:42.661 16:10:13 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:42.661 16:10:13 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:42.661 16:10:13 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:42.661 16:10:13 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:42.661 16:10:13 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:42.661 16:10:13 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:42.661 16:10:13 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:42.661 16:10:13 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:42.661 16:10:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:42.661 16:10:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:42.661 16:10:13 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:42.661 16:10:13 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:42.661 16:10:13 -- nvmf/common.sh@104 -- # continue 2 00:19:42.661 16:10:13 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:42.661 16:10:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:42.661 16:10:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:42.661 16:10:13 -- nvmf/common.sh@101 -- # for rxe_net_dev in 
"${rxe_net_devs[@]}" 00:19:42.661 16:10:13 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:42.661 16:10:13 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:42.661 16:10:13 -- nvmf/common.sh@104 -- # continue 2 00:19:42.661 16:10:13 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:42.661 16:10:13 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:42.661 16:10:13 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:42.661 16:10:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:42.661 16:10:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:42.661 16:10:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:42.661 16:10:13 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:42.661 16:10:13 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:42.661 16:10:13 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:42.661 16:10:13 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:42.661 16:10:13 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:42.661 16:10:13 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:42.661 16:10:13 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:42.661 192.168.100.9' 00:19:42.661 16:10:13 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:42.661 192.168.100.9' 00:19:42.661 16:10:13 -- nvmf/common.sh@445 -- # head -n 1 00:19:42.661 16:10:13 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:42.661 16:10:13 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:42.661 192.168.100.9' 00:19:42.661 16:10:13 -- nvmf/common.sh@446 -- # tail -n +2 00:19:42.661 16:10:13 -- nvmf/common.sh@446 -- # head -n 1 00:19:42.661 16:10:13 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:42.661 16:10:13 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:42.661 16:10:13 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:42.661 16:10:13 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:42.661 16:10:13 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:42.661 16:10:13 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:42.661 16:10:13 -- target/multipath.sh@45 -- # '[' -z 192.168.100.9 ']' 00:19:42.661 16:10:13 -- target/multipath.sh@51 -- # '[' rdma '!=' tcp ']' 00:19:42.661 16:10:13 -- target/multipath.sh@52 -- # echo 'run this test only with TCP transport for now' 00:19:42.661 run this test only with TCP transport for now 00:19:42.661 16:10:13 -- target/multipath.sh@53 -- # nvmftestfini 00:19:42.661 16:10:13 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:42.661 16:10:13 -- nvmf/common.sh@116 -- # sync 00:19:42.661 16:10:13 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:42.661 16:10:13 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:42.661 16:10:13 -- nvmf/common.sh@119 -- # set +e 00:19:42.661 16:10:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:42.661 16:10:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:42.661 rmmod nvme_rdma 00:19:42.661 rmmod nvme_fabrics 00:19:42.661 16:10:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:42.661 16:10:13 -- nvmf/common.sh@123 -- # set -e 00:19:42.661 16:10:13 -- nvmf/common.sh@124 -- # return 0 00:19:42.661 16:10:13 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:42.661 16:10:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:42.661 16:10:13 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:42.661 16:10:13 -- target/multipath.sh@54 -- # exit 0 00:19:42.661 16:10:13 -- target/multipath.sh@1 -- # nvmftestfini 00:19:42.661 16:10:13 -- 
nvmf/common.sh@476 -- # nvmfcleanup 00:19:42.661 16:10:13 -- nvmf/common.sh@116 -- # sync 00:19:42.661 16:10:13 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:42.661 16:10:13 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:42.661 16:10:13 -- nvmf/common.sh@119 -- # set +e 00:19:42.661 16:10:13 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:42.661 16:10:13 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:42.661 16:10:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:42.661 16:10:13 -- nvmf/common.sh@123 -- # set -e 00:19:42.661 16:10:13 -- nvmf/common.sh@124 -- # return 0 00:19:42.661 16:10:13 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:19:42.661 16:10:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:19:42.661 16:10:13 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:42.661 00:19:42.661 real 0m6.914s 00:19:42.661 user 0m1.990s 00:19:42.661 sys 0m5.129s 00:19:42.661 16:10:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:42.661 16:10:13 -- common/autotest_common.sh@10 -- # set +x 00:19:42.661 ************************************ 00:19:42.661 END TEST nvmf_multipath 00:19:42.661 ************************************ 00:19:42.661 16:10:13 -- nvmf/nvmf.sh@52 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:42.661 16:10:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:42.661 16:10:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:42.661 16:10:13 -- common/autotest_common.sh@10 -- # set +x 00:19:42.661 ************************************ 00:19:42.661 START TEST nvmf_zcopy 00:19:42.661 ************************************ 00:19:42.661 16:10:13 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=rdma 00:19:42.661 * Looking for test storage... 00:19:42.661 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:42.661 16:10:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:42.661 16:10:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:42.661 16:10:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:42.920 16:10:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:42.920 16:10:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:42.920 16:10:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:42.920 16:10:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:42.921 16:10:13 -- scripts/common.sh@335 -- # IFS=.-: 00:19:42.921 16:10:13 -- scripts/common.sh@335 -- # read -ra ver1 00:19:42.921 16:10:13 -- scripts/common.sh@336 -- # IFS=.-: 00:19:42.921 16:10:13 -- scripts/common.sh@336 -- # read -ra ver2 00:19:42.921 16:10:13 -- scripts/common.sh@337 -- # local 'op=<' 00:19:42.921 16:10:13 -- scripts/common.sh@339 -- # ver1_l=2 00:19:42.921 16:10:13 -- scripts/common.sh@340 -- # ver2_l=1 00:19:42.921 16:10:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:42.921 16:10:13 -- scripts/common.sh@343 -- # case "$op" in 00:19:42.921 16:10:13 -- scripts/common.sh@344 -- # : 1 00:19:42.921 16:10:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:42.921 16:10:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:42.921 16:10:13 -- scripts/common.sh@364 -- # decimal 1 00:19:42.921 16:10:13 -- scripts/common.sh@352 -- # local d=1 00:19:42.921 16:10:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:42.921 16:10:13 -- scripts/common.sh@354 -- # echo 1 00:19:42.921 16:10:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:42.921 16:10:13 -- scripts/common.sh@365 -- # decimal 2 00:19:42.921 16:10:13 -- scripts/common.sh@352 -- # local d=2 00:19:42.921 16:10:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:42.921 16:10:13 -- scripts/common.sh@354 -- # echo 2 00:19:42.921 16:10:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:42.921 16:10:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:42.921 16:10:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:42.921 16:10:13 -- scripts/common.sh@367 -- # return 0 00:19:42.921 16:10:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:42.921 16:10:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:42.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.921 --rc genhtml_branch_coverage=1 00:19:42.921 --rc genhtml_function_coverage=1 00:19:42.921 --rc genhtml_legend=1 00:19:42.921 --rc geninfo_all_blocks=1 00:19:42.921 --rc geninfo_unexecuted_blocks=1 00:19:42.921 00:19:42.921 ' 00:19:42.921 16:10:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:42.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.921 --rc genhtml_branch_coverage=1 00:19:42.921 --rc genhtml_function_coverage=1 00:19:42.921 --rc genhtml_legend=1 00:19:42.921 --rc geninfo_all_blocks=1 00:19:42.921 --rc geninfo_unexecuted_blocks=1 00:19:42.921 00:19:42.921 ' 00:19:42.921 16:10:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:42.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.921 --rc genhtml_branch_coverage=1 00:19:42.921 --rc genhtml_function_coverage=1 00:19:42.921 --rc genhtml_legend=1 00:19:42.921 --rc geninfo_all_blocks=1 00:19:42.921 --rc geninfo_unexecuted_blocks=1 00:19:42.921 00:19:42.921 ' 00:19:42.921 16:10:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:42.921 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:42.921 --rc genhtml_branch_coverage=1 00:19:42.921 --rc genhtml_function_coverage=1 00:19:42.921 --rc genhtml_legend=1 00:19:42.921 --rc geninfo_all_blocks=1 00:19:42.921 --rc geninfo_unexecuted_blocks=1 00:19:42.921 00:19:42.921 ' 00:19:42.921 16:10:13 -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:42.921 16:10:13 -- nvmf/common.sh@7 -- # uname -s 00:19:42.921 16:10:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.921 16:10:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.921 16:10:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.921 16:10:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.921 16:10:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.921 16:10:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.921 16:10:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.921 16:10:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.921 16:10:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.921 16:10:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.921 16:10:13 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:19:42.921 16:10:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:42.921 16:10:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.921 16:10:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.921 16:10:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:42.921 16:10:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:42.921 16:10:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.921 16:10:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.921 16:10:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.921 16:10:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.921 16:10:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.921 16:10:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.921 16:10:13 -- paths/export.sh@5 -- # export PATH 00:19:42.921 16:10:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.921 16:10:13 -- nvmf/common.sh@46 -- # : 0 00:19:42.921 16:10:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:42.921 16:10:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:42.921 16:10:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:42.921 16:10:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.921 16:10:13 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.921 16:10:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:42.921 16:10:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:42.921 16:10:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:42.921 16:10:13 -- target/zcopy.sh@12 -- # nvmftestinit 00:19:42.921 16:10:13 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:42.921 16:10:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.921 16:10:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:42.921 16:10:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:42.921 16:10:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:42.921 16:10:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.921 16:10:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.921 16:10:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.921 16:10:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:42.921 16:10:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:42.921 16:10:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:42.921 16:10:13 -- common/autotest_common.sh@10 -- # set +x 00:19:49.495 16:10:19 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:49.495 16:10:19 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:49.495 16:10:19 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:49.495 16:10:19 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:49.495 16:10:19 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:49.495 16:10:19 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:49.495 16:10:19 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:49.495 16:10:19 -- nvmf/common.sh@294 -- # net_devs=() 00:19:49.495 16:10:19 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:49.495 16:10:19 -- nvmf/common.sh@295 -- # e810=() 00:19:49.495 16:10:19 -- nvmf/common.sh@295 -- # local -ga e810 00:19:49.495 16:10:19 -- nvmf/common.sh@296 -- # x722=() 00:19:49.495 16:10:19 -- nvmf/common.sh@296 -- # local -ga x722 00:19:49.495 16:10:19 -- nvmf/common.sh@297 -- # mlx=() 00:19:49.495 16:10:19 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:49.495 16:10:19 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:49.495 16:10:19 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:49.495 16:10:19 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:49.495 16:10:19 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:49.495 16:10:19 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:49.495 16:10:19 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:49.495 16:10:19 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:49.495 16:10:19 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:49.495 16:10:19 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:49.495 16:10:19 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:49.495 16:10:19 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:49.495 16:10:19 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:49.495 16:10:19 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:49.495 16:10:19 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:49.495 16:10:19 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:49.495 16:10:19 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:49.495 
16:10:19 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:49.495 16:10:19 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:49.495 16:10:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:49.495 16:10:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:49.495 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:49.495 16:10:19 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:49.495 16:10:19 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:49.495 16:10:19 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:49.495 16:10:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:49.495 16:10:19 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:49.495 16:10:19 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:49.495 16:10:19 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:49.495 16:10:19 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:49.495 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:49.495 16:10:19 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:49.495 16:10:19 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:49.495 16:10:19 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:49.495 16:10:19 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:49.495 16:10:19 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:49.495 16:10:19 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:49.495 16:10:19 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:49.495 16:10:19 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:49.495 16:10:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:49.495 16:10:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.495 16:10:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:49.495 16:10:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.495 16:10:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:49.495 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:49.495 16:10:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.495 16:10:19 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:49.495 16:10:19 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:49.496 16:10:19 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:49.496 16:10:19 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:49.496 16:10:19 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:49.496 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:49.496 16:10:19 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:49.496 16:10:19 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:49.496 16:10:19 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:49.496 16:10:19 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:49.496 16:10:19 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:49.496 16:10:19 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:49.496 16:10:19 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:49.496 16:10:19 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:49.496 16:10:19 -- nvmf/common.sh@57 -- # uname 00:19:49.496 16:10:19 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:49.496 16:10:19 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:49.496 16:10:19 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:49.496 16:10:19 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:19:49.496 16:10:19 -- 
nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:49.496 16:10:19 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:49.496 16:10:19 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:49.496 16:10:19 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:49.496 16:10:19 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:49.496 16:10:19 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:49.496 16:10:19 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:49.496 16:10:19 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:49.496 16:10:19 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:49.496 16:10:19 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:49.496 16:10:19 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:49.496 16:10:19 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:49.496 16:10:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:49.496 16:10:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:49.496 16:10:19 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:49.496 16:10:19 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:49.496 16:10:19 -- nvmf/common.sh@104 -- # continue 2 00:19:49.496 16:10:19 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:49.496 16:10:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:49.496 16:10:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:49.496 16:10:19 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:49.496 16:10:19 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:49.496 16:10:19 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:49.496 16:10:19 -- nvmf/common.sh@104 -- # continue 2 00:19:49.496 16:10:19 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:49.496 16:10:19 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:49.496 16:10:19 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:49.496 16:10:19 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:49.496 16:10:19 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:49.496 16:10:19 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:49.496 16:10:19 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:49.496 16:10:19 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:49.496 16:10:19 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:49.496 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:49.496 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:49.496 altname enp217s0f0np0 00:19:49.496 altname ens818f0np0 00:19:49.496 inet 192.168.100.8/24 scope global mlx_0_0 00:19:49.496 valid_lft forever preferred_lft forever 00:19:49.496 16:10:20 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:49.496 16:10:20 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:49.496 16:10:20 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:49.496 16:10:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:49.496 16:10:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:49.496 16:10:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:49.496 16:10:20 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:49.496 16:10:20 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:49.496 16:10:20 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:49.496 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:49.496 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:19:49.496 altname enp217s0f1np1 00:19:49.496 altname 
ens818f1np1 00:19:49.496 inet 192.168.100.9/24 scope global mlx_0_1 00:19:49.496 valid_lft forever preferred_lft forever 00:19:49.496 16:10:20 -- nvmf/common.sh@410 -- # return 0 00:19:49.496 16:10:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:49.496 16:10:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:49.496 16:10:20 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:49.496 16:10:20 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:49.496 16:10:20 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:49.496 16:10:20 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:49.496 16:10:20 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:49.496 16:10:20 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:49.496 16:10:20 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:49.496 16:10:20 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:49.496 16:10:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:49.496 16:10:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:49.496 16:10:20 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:49.496 16:10:20 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:49.496 16:10:20 -- nvmf/common.sh@104 -- # continue 2 00:19:49.496 16:10:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:49.496 16:10:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:49.496 16:10:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:49.496 16:10:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:49.496 16:10:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:49.496 16:10:20 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:49.496 16:10:20 -- nvmf/common.sh@104 -- # continue 2 00:19:49.496 16:10:20 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:49.496 16:10:20 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:49.496 16:10:20 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:49.496 16:10:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:49.496 16:10:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:49.496 16:10:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:49.496 16:10:20 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:49.496 16:10:20 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:49.496 16:10:20 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:49.496 16:10:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:49.496 16:10:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:49.496 16:10:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:49.496 16:10:20 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:49.496 192.168.100.9' 00:19:49.496 16:10:20 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:49.496 192.168.100.9' 00:19:49.496 16:10:20 -- nvmf/common.sh@445 -- # head -n 1 00:19:49.496 16:10:20 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:49.496 16:10:20 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:49.496 192.168.100.9' 00:19:49.496 16:10:20 -- nvmf/common.sh@446 -- # tail -n +2 00:19:49.496 16:10:20 -- nvmf/common.sh@446 -- # head -n 1 00:19:49.496 16:10:20 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:49.496 16:10:20 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:49.496 16:10:20 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:49.496 
16:10:20 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:49.496 16:10:20 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:49.496 16:10:20 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:49.496 16:10:20 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:19:49.496 16:10:20 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:49.496 16:10:20 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:49.496 16:10:20 -- common/autotest_common.sh@10 -- # set +x 00:19:49.496 16:10:20 -- nvmf/common.sh@469 -- # nvmfpid=1372824 00:19:49.496 16:10:20 -- nvmf/common.sh@470 -- # waitforlisten 1372824 00:19:49.496 16:10:20 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:49.496 16:10:20 -- common/autotest_common.sh@829 -- # '[' -z 1372824 ']' 00:19:49.496 16:10:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.496 16:10:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:49.496 16:10:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:49.496 16:10:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:49.496 16:10:20 -- common/autotest_common.sh@10 -- # set +x 00:19:49.496 [2024-11-20 16:10:20.178901] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:49.496 [2024-11-20 16:10:20.178953] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.496 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.496 [2024-11-20 16:10:20.249784] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:49.496 [2024-11-20 16:10:20.286470] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:49.496 [2024-11-20 16:10:20.286584] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.496 [2024-11-20 16:10:20.286594] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.496 [2024-11-20 16:10:20.286603] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
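The get_ip_address calls traced above pull each RDMA interface's IPv4 address out of "ip -o -4 addr show", and the two results become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP. A minimal sketch of that step, reconstructed from the xtrace output rather than the script source (interface names are the mlx_0_* netdevs found under the Mellanox PCI functions):

get_ip_address() {
    local interface=$1
    # field 4 of "ip -o -4 addr show" is e.g. "192.168.100.8/24"; strip the prefix length
    ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
}

RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9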
00:19:49.496 [2024-11-20 16:10:20.286628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:50.436 16:10:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:50.436 16:10:20 -- common/autotest_common.sh@862 -- # return 0 00:19:50.436 16:10:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:50.436 16:10:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:50.436 16:10:20 -- common/autotest_common.sh@10 -- # set +x 00:19:50.436 16:10:21 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:50.436 16:10:21 -- target/zcopy.sh@15 -- # '[' rdma '!=' tcp ']' 00:19:50.436 16:10:21 -- target/zcopy.sh@16 -- # echo 'Unsupported transport: rdma' 00:19:50.436 Unsupported transport: rdma 00:19:50.436 16:10:21 -- target/zcopy.sh@17 -- # exit 0 00:19:50.436 16:10:21 -- target/zcopy.sh@1 -- # process_shm --id 0 00:19:50.436 16:10:21 -- common/autotest_common.sh@806 -- # type=--id 00:19:50.436 16:10:21 -- common/autotest_common.sh@807 -- # id=0 00:19:50.436 16:10:21 -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:19:50.436 16:10:21 -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:50.436 16:10:21 -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:19:50.436 16:10:21 -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:19:50.436 16:10:21 -- common/autotest_common.sh@818 -- # for n in $shm_files 00:19:50.436 16:10:21 -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:50.436 nvmf_trace.0 00:19:50.436 16:10:21 -- common/autotest_common.sh@821 -- # return 0 00:19:50.436 16:10:21 -- target/zcopy.sh@1 -- # nvmftestfini 00:19:50.436 16:10:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:19:50.436 16:10:21 -- nvmf/common.sh@116 -- # sync 00:19:50.436 16:10:21 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:19:50.436 16:10:21 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:19:50.436 16:10:21 -- nvmf/common.sh@119 -- # set +e 00:19:50.436 16:10:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:19:50.436 16:10:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:19:50.436 rmmod nvme_rdma 00:19:50.436 rmmod nvme_fabrics 00:19:50.436 16:10:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:19:50.436 16:10:21 -- nvmf/common.sh@123 -- # set -e 00:19:50.436 16:10:21 -- nvmf/common.sh@124 -- # return 0 00:19:50.436 16:10:21 -- nvmf/common.sh@477 -- # '[' -n 1372824 ']' 00:19:50.436 16:10:21 -- nvmf/common.sh@478 -- # killprocess 1372824 00:19:50.436 16:10:21 -- common/autotest_common.sh@936 -- # '[' -z 1372824 ']' 00:19:50.436 16:10:21 -- common/autotest_common.sh@940 -- # kill -0 1372824 00:19:50.436 16:10:21 -- common/autotest_common.sh@941 -- # uname 00:19:50.436 16:10:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:50.436 16:10:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1372824 00:19:50.436 16:10:21 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:19:50.436 16:10:21 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:19:50.436 16:10:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1372824' 00:19:50.436 killing process with pid 1372824 00:19:50.436 16:10:21 -- common/autotest_common.sh@955 -- # kill 1372824 00:19:50.436 16:10:21 -- common/autotest_common.sh@960 -- # wait 1372824 00:19:50.696 16:10:21 -- nvmf/common.sh@480 -- # '[' 
'' == iso ']' 00:19:50.696 16:10:21 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:19:50.696 00:19:50.696 real 0m8.017s 00:19:50.696 user 0m3.407s 00:19:50.696 sys 0m5.372s 00:19:50.696 16:10:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:19:50.696 16:10:21 -- common/autotest_common.sh@10 -- # set +x 00:19:50.696 ************************************ 00:19:50.696 END TEST nvmf_zcopy 00:19:50.696 ************************************ 00:19:50.696 16:10:21 -- nvmf/nvmf.sh@53 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:50.696 16:10:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:50.696 16:10:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:50.696 16:10:21 -- common/autotest_common.sh@10 -- # set +x 00:19:50.696 ************************************ 00:19:50.696 START TEST nvmf_nmic 00:19:50.696 ************************************ 00:19:50.696 16:10:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=rdma 00:19:50.696 * Looking for test storage... 00:19:50.696 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:19:50.696 16:10:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:19:50.956 16:10:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:19:50.956 16:10:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:19:50.956 16:10:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:19:50.956 16:10:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:19:50.956 16:10:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:19:50.956 16:10:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:19:50.956 16:10:21 -- scripts/common.sh@335 -- # IFS=.-: 00:19:50.956 16:10:21 -- scripts/common.sh@335 -- # read -ra ver1 00:19:50.956 16:10:21 -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.956 16:10:21 -- scripts/common.sh@336 -- # read -ra ver2 00:19:50.956 16:10:21 -- scripts/common.sh@337 -- # local 'op=<' 00:19:50.956 16:10:21 -- scripts/common.sh@339 -- # ver1_l=2 00:19:50.956 16:10:21 -- scripts/common.sh@340 -- # ver2_l=1 00:19:50.956 16:10:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:19:50.956 16:10:21 -- scripts/common.sh@343 -- # case "$op" in 00:19:50.956 16:10:21 -- scripts/common.sh@344 -- # : 1 00:19:50.956 16:10:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:19:50.956 16:10:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:50.956 16:10:21 -- scripts/common.sh@364 -- # decimal 1 00:19:50.956 16:10:21 -- scripts/common.sh@352 -- # local d=1 00:19:50.956 16:10:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.956 16:10:21 -- scripts/common.sh@354 -- # echo 1 00:19:50.956 16:10:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:19:50.956 16:10:21 -- scripts/common.sh@365 -- # decimal 2 00:19:50.956 16:10:21 -- scripts/common.sh@352 -- # local d=2 00:19:50.956 16:10:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.956 16:10:21 -- scripts/common.sh@354 -- # echo 2 00:19:50.956 16:10:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:19:50.956 16:10:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:19:50.956 16:10:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:19:50.956 16:10:21 -- scripts/common.sh@367 -- # return 0 00:19:50.956 16:10:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.956 16:10:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:19:50.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.956 --rc genhtml_branch_coverage=1 00:19:50.956 --rc genhtml_function_coverage=1 00:19:50.956 --rc genhtml_legend=1 00:19:50.956 --rc geninfo_all_blocks=1 00:19:50.956 --rc geninfo_unexecuted_blocks=1 00:19:50.956 00:19:50.956 ' 00:19:50.956 16:10:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:19:50.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.956 --rc genhtml_branch_coverage=1 00:19:50.956 --rc genhtml_function_coverage=1 00:19:50.956 --rc genhtml_legend=1 00:19:50.956 --rc geninfo_all_blocks=1 00:19:50.956 --rc geninfo_unexecuted_blocks=1 00:19:50.956 00:19:50.956 ' 00:19:50.956 16:10:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:19:50.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.956 --rc genhtml_branch_coverage=1 00:19:50.956 --rc genhtml_function_coverage=1 00:19:50.956 --rc genhtml_legend=1 00:19:50.956 --rc geninfo_all_blocks=1 00:19:50.956 --rc geninfo_unexecuted_blocks=1 00:19:50.956 00:19:50.956 ' 00:19:50.956 16:10:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:19:50.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.956 --rc genhtml_branch_coverage=1 00:19:50.956 --rc genhtml_function_coverage=1 00:19:50.956 --rc genhtml_legend=1 00:19:50.956 --rc geninfo_all_blocks=1 00:19:50.956 --rc geninfo_unexecuted_blocks=1 00:19:50.956 00:19:50.956 ' 00:19:50.956 16:10:21 -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:19:50.956 16:10:21 -- nvmf/common.sh@7 -- # uname -s 00:19:50.956 16:10:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:50.956 16:10:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:50.956 16:10:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:50.956 16:10:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:50.956 16:10:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:50.956 16:10:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:50.956 16:10:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:50.956 16:10:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:50.956 16:10:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:50.956 16:10:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:50.956 16:10:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
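The "lt 1.15 2" check traced above (scripts/common.sh) splits both version strings on ".", "-" and ":" and compares them field by field; the run ends at "return 0" because 1 < 2 in the first field, so the lcov rc options get exported. A condensed, illustrative sketch of that comparison, assuming nothing beyond what the xtrace shows:

cmp_versions() {             # e.g. cmp_versions 1.15 '<' 2
    local IFS=.-:            # split fields on ".", "-" and ":"
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local op=$2 v ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
    for (( v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}    # missing fields compare as 0
        if (( d1 > d2 )); then [[ $op == '>' || $op == '>=' ]]; return; fi
        if (( d1 < d2 )); then [[ $op == '<' || $op == '<=' ]]; return; fi
    done
    [[ $op == '==' || $op == '<=' || $op == '>=' ]]
}
lt() { cmp_versions "$1" '<' "$2"; }   # lt 1.15 2 succeeds here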
00:19:50.956 16:10:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:19:50.956 16:10:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:50.956 16:10:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:50.956 16:10:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:50.956 16:10:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:19:50.956 16:10:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:50.956 16:10:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:50.956 16:10:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:50.956 16:10:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.956 16:10:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.956 16:10:21 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.956 16:10:21 -- paths/export.sh@5 -- # export PATH 00:19:50.956 16:10:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:50.956 16:10:21 -- nvmf/common.sh@46 -- # : 0 00:19:50.956 16:10:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:19:50.956 16:10:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:19:50.956 16:10:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:19:50.956 16:10:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:50.956 16:10:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:50.956 16:10:21 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:19:50.956 16:10:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:19:50.956 16:10:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:19:50.956 16:10:21 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:50.956 16:10:21 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:50.956 16:10:21 -- target/nmic.sh@14 -- # nvmftestinit 00:19:50.956 16:10:21 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:19:50.956 16:10:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:50.956 16:10:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:19:50.956 16:10:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:19:50.956 16:10:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:19:50.956 16:10:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:50.956 16:10:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:50.956 16:10:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:50.956 16:10:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:19:50.956 16:10:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:19:50.956 16:10:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:19:50.956 16:10:21 -- common/autotest_common.sh@10 -- # set +x 00:19:57.531 16:10:28 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:19:57.531 16:10:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:19:57.531 16:10:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:19:57.531 16:10:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:19:57.531 16:10:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:19:57.531 16:10:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:19:57.531 16:10:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:19:57.531 16:10:28 -- nvmf/common.sh@294 -- # net_devs=() 00:19:57.531 16:10:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:19:57.531 16:10:28 -- nvmf/common.sh@295 -- # e810=() 00:19:57.531 16:10:28 -- nvmf/common.sh@295 -- # local -ga e810 00:19:57.531 16:10:28 -- nvmf/common.sh@296 -- # x722=() 00:19:57.531 16:10:28 -- nvmf/common.sh@296 -- # local -ga x722 00:19:57.531 16:10:28 -- nvmf/common.sh@297 -- # mlx=() 00:19:57.531 16:10:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:19:57.531 16:10:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:57.531 16:10:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:57.531 16:10:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:57.531 16:10:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:57.531 16:10:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:57.531 16:10:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:57.531 16:10:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:57.531 16:10:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:57.531 16:10:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:57.531 16:10:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:57.531 16:10:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:57.531 16:10:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:19:57.531 16:10:28 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:19:57.531 16:10:28 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:19:57.531 16:10:28 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:19:57.531 16:10:28 
-- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:19:57.531 16:10:28 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:19:57.531 16:10:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:19:57.531 16:10:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:57.531 16:10:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:19:57.531 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:19:57.531 16:10:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:57.531 16:10:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:57.531 16:10:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:57.532 16:10:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:57.532 16:10:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:57.532 16:10:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:57.532 16:10:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:19:57.532 16:10:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:19:57.532 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:19:57.532 16:10:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:19:57.532 16:10:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:19:57.532 16:10:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:19:57.532 16:10:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:19:57.532 16:10:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:19:57.532 16:10:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:19:57.532 16:10:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:19:57.532 16:10:28 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:19:57.532 16:10:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:57.532 16:10:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.532 16:10:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:57.532 16:10:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.532 16:10:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:19:57.532 Found net devices under 0000:d9:00.0: mlx_0_0 00:19:57.532 16:10:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.532 16:10:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:19:57.532 16:10:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:57.532 16:10:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:19:57.532 16:10:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:57.532 16:10:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:19:57.532 Found net devices under 0000:d9:00.1: mlx_0_1 00:19:57.532 16:10:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:19:57.532 16:10:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:19:57.532 16:10:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:19:57.532 16:10:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:19:57.532 16:10:28 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:19:57.532 16:10:28 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:19:57.532 16:10:28 -- nvmf/common.sh@408 -- # rdma_device_init 00:19:57.532 16:10:28 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:19:57.532 16:10:28 -- nvmf/common.sh@57 -- # uname 00:19:57.532 16:10:28 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:19:57.532 16:10:28 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:19:57.532 16:10:28 -- nvmf/common.sh@62 -- # modprobe ib_core 00:19:57.532 16:10:28 -- 
nvmf/common.sh@63 -- # modprobe ib_umad 00:19:57.532 16:10:28 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:19:57.532 16:10:28 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:19:57.532 16:10:28 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:19:57.532 16:10:28 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:19:57.532 16:10:28 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:19:57.791 16:10:28 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:19:57.791 16:10:28 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:19:57.791 16:10:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:57.791 16:10:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:57.791 16:10:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:57.791 16:10:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:57.791 16:10:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:57.791 16:10:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:57.791 16:10:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.791 16:10:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:57.791 16:10:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:57.791 16:10:28 -- nvmf/common.sh@104 -- # continue 2 00:19:57.791 16:10:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:57.791 16:10:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.791 16:10:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:57.791 16:10:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.791 16:10:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:57.791 16:10:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:57.791 16:10:28 -- nvmf/common.sh@104 -- # continue 2 00:19:57.791 16:10:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:57.791 16:10:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:19:57.791 16:10:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:57.791 16:10:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:57.791 16:10:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:57.791 16:10:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:57.791 16:10:28 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:19:57.791 16:10:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:19:57.791 16:10:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:19:57.791 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:57.791 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:19:57.791 altname enp217s0f0np0 00:19:57.791 altname ens818f0np0 00:19:57.791 inet 192.168.100.8/24 scope global mlx_0_0 00:19:57.791 valid_lft forever preferred_lft forever 00:19:57.791 16:10:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:19:57.791 16:10:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:19:57.791 16:10:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:57.791 16:10:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:57.791 16:10:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:57.791 16:10:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:57.791 16:10:28 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:19:57.791 16:10:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:19:57.791 16:10:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:19:57.791 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:19:57.791 link/ether ec:0d:9a:8b:2d:dd brd 
ff:ff:ff:ff:ff:ff 00:19:57.791 altname enp217s0f1np1 00:19:57.791 altname ens818f1np1 00:19:57.791 inet 192.168.100.9/24 scope global mlx_0_1 00:19:57.791 valid_lft forever preferred_lft forever 00:19:57.791 16:10:28 -- nvmf/common.sh@410 -- # return 0 00:19:57.791 16:10:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:19:57.791 16:10:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:19:57.791 16:10:28 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:19:57.791 16:10:28 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:19:57.791 16:10:28 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:19:57.791 16:10:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:19:57.791 16:10:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:19:57.791 16:10:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:19:57.791 16:10:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:19:57.791 16:10:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:19:57.791 16:10:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:57.791 16:10:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.791 16:10:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:19:57.791 16:10:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:19:57.791 16:10:28 -- nvmf/common.sh@104 -- # continue 2 00:19:57.791 16:10:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:19:57.791 16:10:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.791 16:10:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:19:57.791 16:10:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:19:57.792 16:10:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:19:57.792 16:10:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:19:57.792 16:10:28 -- nvmf/common.sh@104 -- # continue 2 00:19:57.792 16:10:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:57.792 16:10:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:19:57.792 16:10:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:19:57.792 16:10:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:19:57.792 16:10:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:57.792 16:10:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:57.792 16:10:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:19:57.792 16:10:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:19:57.792 16:10:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:19:57.792 16:10:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:19:57.792 16:10:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:19:57.792 16:10:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:19:57.792 16:10:28 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:19:57.792 192.168.100.9' 00:19:57.792 16:10:28 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:19:57.792 192.168.100.9' 00:19:57.792 16:10:28 -- nvmf/common.sh@445 -- # head -n 1 00:19:57.792 16:10:28 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:19:57.792 16:10:28 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:19:57.792 192.168.100.9' 00:19:57.792 16:10:28 -- nvmf/common.sh@446 -- # tail -n +2 00:19:57.792 16:10:28 -- nvmf/common.sh@446 -- # head -n 1 00:19:57.792 16:10:28 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:19:57.792 16:10:28 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:19:57.792 16:10:28 -- nvmf/common.sh@451 -- # 
NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:19:57.792 16:10:28 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:19:57.792 16:10:28 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:19:57.792 16:10:28 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:19:57.792 16:10:28 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:19:57.792 16:10:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:19:57.792 16:10:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:57.792 16:10:28 -- common/autotest_common.sh@10 -- # set +x 00:19:57.792 16:10:28 -- nvmf/common.sh@469 -- # nvmfpid=1376298 00:19:57.792 16:10:28 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:57.792 16:10:28 -- nvmf/common.sh@470 -- # waitforlisten 1376298 00:19:57.792 16:10:28 -- common/autotest_common.sh@829 -- # '[' -z 1376298 ']' 00:19:57.792 16:10:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.792 16:10:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.792 16:10:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.792 16:10:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.792 16:10:28 -- common/autotest_common.sh@10 -- # set +x 00:19:57.792 [2024-11-20 16:10:28.569769] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:19:57.792 [2024-11-20 16:10:28.569820] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:58.051 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.051 [2024-11-20 16:10:28.643258] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:58.051 [2024-11-20 16:10:28.682408] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:19:58.051 [2024-11-20 16:10:28.682524] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:58.051 [2024-11-20 16:10:28.682535] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:58.051 [2024-11-20 16:10:28.682544] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
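nvmfappstart above launches the target and blocks until its RPC socket is up: the trace records the backgrounded nvmf_tgt command, the captured pid (1376298), and the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message. A rough sketch of that launch-and-wait pattern; the polling-with-rpc.py loop is an assumption for illustration, the real waitforlisten helper lives in autotest_common.sh:

# start the target in the background with the same flags as in the trace (-m 0xF: 4 cores)
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# poll the default RPC socket until the app answers (assumed polling strategy)
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods > /dev/null 2>&1; do
    kill -0 "$nvmfpid" || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.5
done

# once it is listening, configure the RDMA transport as target/nmic.sh does next
./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192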
00:19:58.051 [2024-11-20 16:10:28.682597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.051 [2024-11-20 16:10:28.682619] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.051 [2024-11-20 16:10:28.682703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:58.051 [2024-11-20 16:10:28.682704] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.619 16:10:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.619 16:10:29 -- common/autotest_common.sh@862 -- # return 0 00:19:58.619 16:10:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:19:58.619 16:10:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:58.619 16:10:29 -- common/autotest_common.sh@10 -- # set +x 00:19:58.878 16:10:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.878 16:10:29 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:19:58.878 16:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.878 16:10:29 -- common/autotest_common.sh@10 -- # set +x 00:19:58.878 [2024-11-20 16:10:29.472014] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x5b20d0/0x5b65a0) succeed. 00:19:58.878 [2024-11-20 16:10:29.481253] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x5b3670/0x5f7c40) succeed. 00:19:58.878 16:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.878 16:10:29 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:19:58.878 16:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.878 16:10:29 -- common/autotest_common.sh@10 -- # set +x 00:19:58.878 Malloc0 00:19:58.878 16:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.878 16:10:29 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:58.878 16:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.878 16:10:29 -- common/autotest_common.sh@10 -- # set +x 00:19:58.878 16:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.878 16:10:29 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:58.878 16:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.878 16:10:29 -- common/autotest_common.sh@10 -- # set +x 00:19:58.878 16:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.878 16:10:29 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:19:58.878 16:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.878 16:10:29 -- common/autotest_common.sh@10 -- # set +x 00:19:58.878 [2024-11-20 16:10:29.652031] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:19:58.878 16:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.878 16:10:29 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:19:58.878 test case1: single bdev can't be used in multiple subsystems 00:19:58.878 16:10:29 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:19:58.878 16:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.878 16:10:29 -- common/autotest_common.sh@10 -- # set +x 00:19:58.878 16:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.878 
16:10:29 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:19:58.878 16:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.878 16:10:29 -- common/autotest_common.sh@10 -- # set +x 00:19:58.878 16:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.878 16:10:29 -- target/nmic.sh@28 -- # nmic_status=0 00:19:58.878 16:10:29 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:19:58.878 16:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.878 16:10:29 -- common/autotest_common.sh@10 -- # set +x 00:19:58.878 [2024-11-20 16:10:29.675780] bdev.c:7940:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:19:58.878 [2024-11-20 16:10:29.675801] subsystem.c:1819:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:19:58.878 [2024-11-20 16:10:29.675810] nvmf_rpc.c:1513:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:19:58.878 request: 00:19:58.878 { 00:19:58.878 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:58.878 "namespace": { 00:19:58.878 "bdev_name": "Malloc0" 00:19:58.878 }, 00:19:58.878 "method": "nvmf_subsystem_add_ns", 00:19:58.878 "req_id": 1 00:19:58.878 } 00:19:58.878 Got JSON-RPC error response 00:19:58.878 response: 00:19:58.878 { 00:19:59.138 "code": -32602, 00:19:59.138 "message": "Invalid parameters" 00:19:59.138 } 00:19:59.138 16:10:29 -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:59.138 16:10:29 -- target/nmic.sh@29 -- # nmic_status=1 00:19:59.138 16:10:29 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:19:59.138 16:10:29 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:19:59.138 Adding namespace failed - expected result. 
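Test case1 above deliberately provokes the "bdev already claimed" error: Malloc0 is attached to cnode1 first, so the second nvmf_subsystem_add_ns against cnode2 has to fail with the -32602 response shown. The rpc_cmd calls in the trace map onto plain rpc.py invocations roughly like this (NQNs, serials and addresses as in the log):

./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420
# expected to fail: Malloc0 is already claimed (exclusive_write) by cnode1,
# so the target returns the -32602 "Invalid parameters" error captured above
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0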
00:19:59.138 16:10:29 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:19:59.138 test case2: host connect to nvmf target in multiple paths 00:19:59.138 16:10:29 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:19:59.138 16:10:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.138 16:10:29 -- common/autotest_common.sh@10 -- # set +x 00:19:59.138 [2024-11-20 16:10:29.687836] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:19:59.138 16:10:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.138 16:10:29 -- target/nmic.sh@41 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:00.076 16:10:30 -- target/nmic.sh@42 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4421 00:20:01.014 16:10:31 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:20:01.014 16:10:31 -- common/autotest_common.sh@1187 -- # local i=0 00:20:01.014 16:10:31 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:01.014 16:10:31 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:20:01.014 16:10:31 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:02.920 16:10:33 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:02.920 16:10:33 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:02.920 16:10:33 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:20:02.920 16:10:33 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:20:02.920 16:10:33 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:02.920 16:10:33 -- common/autotest_common.sh@1197 -- # return 0 00:20:02.920 16:10:33 -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:02.920 [global] 00:20:02.920 thread=1 00:20:02.920 invalidate=1 00:20:02.920 rw=write 00:20:02.920 time_based=1 00:20:02.920 runtime=1 00:20:02.920 ioengine=libaio 00:20:02.920 direct=1 00:20:02.920 bs=4096 00:20:02.920 iodepth=1 00:20:02.920 norandommap=0 00:20:02.920 numjobs=1 00:20:02.920 00:20:02.920 verify_dump=1 00:20:02.920 verify_backlog=512 00:20:02.920 verify_state_save=0 00:20:02.920 do_verify=1 00:20:02.920 verify=crc32c-intel 00:20:03.205 [job0] 00:20:03.205 filename=/dev/nvme0n1 00:20:03.205 Could not set queue depth (nvme0n1) 00:20:03.464 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:03.464 fio-3.35 00:20:03.464 Starting 1 thread 00:20:04.427 00:20:04.427 job0: (groupid=0, jobs=1): err= 0: pid=1377500: Wed Nov 20 16:10:35 2024 00:20:04.427 read: IOPS=7160, BW=28.0MiB/s (29.3MB/s)(28.0MiB/1001msec) 00:20:04.427 slat (nsec): min=8195, max=31581, avg=8724.88, stdev=787.23 00:20:04.427 clat (usec): min=37, max=112, avg=58.04, stdev= 3.56 00:20:04.427 lat (usec): min=57, max=121, avg=66.77, stdev= 3.59 00:20:04.427 clat percentiles (usec): 00:20:04.427 | 1.00th=[ 52], 5.00th=[ 53], 10.00th=[ 54], 20.00th=[ 56], 00:20:04.427 | 30.00th=[ 57], 40.00th=[ 58], 50.00th=[ 58], 60.00th=[ 59], 00:20:04.427 | 
70.00th=[ 60], 80.00th=[ 62], 90.00th=[ 63], 95.00th=[ 64], 00:20:04.427 | 99.00th=[ 68], 99.50th=[ 69], 99.90th=[ 73], 99.95th=[ 75], 00:20:04.427 | 99.99th=[ 114] 00:20:04.427 write: IOPS=7212, BW=28.2MiB/s (29.5MB/s)(28.2MiB/1001msec); 0 zone resets 00:20:04.427 slat (nsec): min=10642, max=45713, avg=11357.69, stdev=1063.28 00:20:04.427 clat (nsec): min=38097, max=86606, avg=55702.04, stdev=3601.18 00:20:04.427 lat (usec): min=57, max=132, avg=67.06, stdev= 3.74 00:20:04.427 clat percentiles (nsec): 00:20:04.427 | 1.00th=[48896], 5.00th=[50432], 10.00th=[51456], 20.00th=[52480], 00:20:04.427 | 30.00th=[53504], 40.00th=[54528], 50.00th=[55552], 60.00th=[56576], 00:20:04.427 | 70.00th=[57600], 80.00th=[58624], 90.00th=[60672], 95.00th=[61696], 00:20:04.427 | 99.00th=[64768], 99.50th=[66048], 99.90th=[69120], 99.95th=[74240], 00:20:04.427 | 99.99th=[86528] 00:20:04.427 bw ( KiB/s): min=28912, max=28912, per=100.00%, avg=28912.00, stdev= 0.00, samples=1 00:20:04.427 iops : min= 7228, max= 7228, avg=7228.00, stdev= 0.00, samples=1 00:20:04.427 lat (usec) : 50=2.15%, 100=97.84%, 250=0.01% 00:20:04.427 cpu : usr=11.40%, sys=18.70%, ctx=14388, majf=0, minf=1 00:20:04.427 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:04.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:04.427 issued rwts: total=7168,7220,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:04.427 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:04.427 00:20:04.427 Run status group 0 (all jobs): 00:20:04.427 READ: bw=28.0MiB/s (29.3MB/s), 28.0MiB/s-28.0MiB/s (29.3MB/s-29.3MB/s), io=28.0MiB (29.4MB), run=1001-1001msec 00:20:04.427 WRITE: bw=28.2MiB/s (29.5MB/s), 28.2MiB/s-28.2MiB/s (29.5MB/s-29.5MB/s), io=28.2MiB (29.6MB), run=1001-1001msec 00:20:04.427 00:20:04.427 Disk stats (read/write): 00:20:04.427 nvme0n1: ios=6322/6656, merge=0/0, ticks=323/309, in_queue=632, util=90.58% 00:20:04.427 16:10:35 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:06.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:20:06.328 16:10:37 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:06.328 16:10:37 -- common/autotest_common.sh@1208 -- # local i=0 00:20:06.328 16:10:37 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:06.328 16:10:37 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:06.328 16:10:37 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:06.328 16:10:37 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:06.328 16:10:37 -- common/autotest_common.sh@1220 -- # return 0 00:20:06.328 16:10:37 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:20:06.328 16:10:37 -- target/nmic.sh@53 -- # nvmftestfini 00:20:06.328 16:10:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:06.328 16:10:37 -- nvmf/common.sh@116 -- # sync 00:20:06.328 16:10:37 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:06.328 16:10:37 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:06.328 16:10:37 -- nvmf/common.sh@119 -- # set +e 00:20:06.328 16:10:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:06.328 16:10:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:06.328 rmmod nvme_rdma 00:20:06.588 rmmod nvme_fabrics 00:20:06.588 16:10:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:06.588 16:10:37 -- nvmf/common.sh@123 -- # 
set -e 00:20:06.588 16:10:37 -- nvmf/common.sh@124 -- # return 0 00:20:06.588 16:10:37 -- nvmf/common.sh@477 -- # '[' -n 1376298 ']' 00:20:06.588 16:10:37 -- nvmf/common.sh@478 -- # killprocess 1376298 00:20:06.588 16:10:37 -- common/autotest_common.sh@936 -- # '[' -z 1376298 ']' 00:20:06.588 16:10:37 -- common/autotest_common.sh@940 -- # kill -0 1376298 00:20:06.588 16:10:37 -- common/autotest_common.sh@941 -- # uname 00:20:06.588 16:10:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:06.588 16:10:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1376298 00:20:06.588 16:10:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:06.588 16:10:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:06.588 16:10:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1376298' 00:20:06.588 killing process with pid 1376298 00:20:06.588 16:10:37 -- common/autotest_common.sh@955 -- # kill 1376298 00:20:06.588 16:10:37 -- common/autotest_common.sh@960 -- # wait 1376298 00:20:06.855 16:10:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:06.855 16:10:37 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:06.855 00:20:06.855 real 0m16.107s 00:20:06.855 user 0m45.335s 00:20:06.855 sys 0m6.309s 00:20:06.855 16:10:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:06.855 16:10:37 -- common/autotest_common.sh@10 -- # set +x 00:20:06.855 ************************************ 00:20:06.855 END TEST nvmf_nmic 00:20:06.855 ************************************ 00:20:06.855 16:10:37 -- nvmf/nvmf.sh@54 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:20:06.855 16:10:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:06.855 16:10:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:06.855 16:10:37 -- common/autotest_common.sh@10 -- # set +x 00:20:06.855 ************************************ 00:20:06.855 START TEST nvmf_fio_target 00:20:06.855 ************************************ 00:20:06.855 16:10:37 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=rdma 00:20:06.855 * Looking for test storage... 
00:20:06.855 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:06.855 16:10:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:07.115 16:10:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:07.115 16:10:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:07.115 16:10:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:07.115 16:10:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:07.115 16:10:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:07.115 16:10:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:07.115 16:10:37 -- scripts/common.sh@335 -- # IFS=.-: 00:20:07.115 16:10:37 -- scripts/common.sh@335 -- # read -ra ver1 00:20:07.115 16:10:37 -- scripts/common.sh@336 -- # IFS=.-: 00:20:07.115 16:10:37 -- scripts/common.sh@336 -- # read -ra ver2 00:20:07.115 16:10:37 -- scripts/common.sh@337 -- # local 'op=<' 00:20:07.115 16:10:37 -- scripts/common.sh@339 -- # ver1_l=2 00:20:07.115 16:10:37 -- scripts/common.sh@340 -- # ver2_l=1 00:20:07.115 16:10:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:07.115 16:10:37 -- scripts/common.sh@343 -- # case "$op" in 00:20:07.115 16:10:37 -- scripts/common.sh@344 -- # : 1 00:20:07.115 16:10:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:07.115 16:10:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:07.115 16:10:37 -- scripts/common.sh@364 -- # decimal 1 00:20:07.115 16:10:37 -- scripts/common.sh@352 -- # local d=1 00:20:07.115 16:10:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:07.115 16:10:37 -- scripts/common.sh@354 -- # echo 1 00:20:07.115 16:10:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:07.115 16:10:37 -- scripts/common.sh@365 -- # decimal 2 00:20:07.115 16:10:37 -- scripts/common.sh@352 -- # local d=2 00:20:07.115 16:10:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:07.115 16:10:37 -- scripts/common.sh@354 -- # echo 2 00:20:07.115 16:10:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:07.115 16:10:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:07.115 16:10:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:07.115 16:10:37 -- scripts/common.sh@367 -- # return 0 00:20:07.115 16:10:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:07.115 16:10:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:07.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.115 --rc genhtml_branch_coverage=1 00:20:07.115 --rc genhtml_function_coverage=1 00:20:07.115 --rc genhtml_legend=1 00:20:07.115 --rc geninfo_all_blocks=1 00:20:07.115 --rc geninfo_unexecuted_blocks=1 00:20:07.115 00:20:07.115 ' 00:20:07.115 16:10:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:07.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.115 --rc genhtml_branch_coverage=1 00:20:07.115 --rc genhtml_function_coverage=1 00:20:07.115 --rc genhtml_legend=1 00:20:07.115 --rc geninfo_all_blocks=1 00:20:07.115 --rc geninfo_unexecuted_blocks=1 00:20:07.115 00:20:07.115 ' 00:20:07.115 16:10:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:07.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.115 --rc genhtml_branch_coverage=1 00:20:07.115 --rc genhtml_function_coverage=1 00:20:07.115 --rc genhtml_legend=1 00:20:07.115 --rc geninfo_all_blocks=1 00:20:07.115 --rc geninfo_unexecuted_blocks=1 00:20:07.115 00:20:07.115 ' 
00:20:07.115 16:10:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:07.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:07.115 --rc genhtml_branch_coverage=1 00:20:07.115 --rc genhtml_function_coverage=1 00:20:07.115 --rc genhtml_legend=1 00:20:07.115 --rc geninfo_all_blocks=1 00:20:07.115 --rc geninfo_unexecuted_blocks=1 00:20:07.115 00:20:07.115 ' 00:20:07.115 16:10:37 -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.115 16:10:37 -- nvmf/common.sh@7 -- # uname -s 00:20:07.115 16:10:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.115 16:10:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.115 16:10:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.115 16:10:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.115 16:10:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.115 16:10:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.115 16:10:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.115 16:10:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.115 16:10:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.115 16:10:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.115 16:10:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:07.115 16:10:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:07.115 16:10:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.115 16:10:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.115 16:10:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.115 16:10:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:07.115 16:10:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:07.115 16:10:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.115 16:10:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.115 16:10:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.115 16:10:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.115 16:10:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.115 16:10:37 -- paths/export.sh@5 -- # export PATH 00:20:07.115 16:10:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.115 16:10:37 -- nvmf/common.sh@46 -- # : 0 00:20:07.115 16:10:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:07.115 16:10:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:07.115 16:10:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:07.115 16:10:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.115 16:10:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.115 16:10:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:07.115 16:10:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:07.115 16:10:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:07.115 16:10:37 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:07.115 16:10:37 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:07.115 16:10:37 -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:20:07.115 16:10:37 -- target/fio.sh@16 -- # nvmftestinit 00:20:07.115 16:10:37 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:07.115 16:10:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.115 16:10:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:07.115 16:10:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:07.115 16:10:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:07.115 16:10:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.115 16:10:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:07.115 16:10:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.115 16:10:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:07.115 16:10:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:07.115 16:10:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:07.115 16:10:37 -- common/autotest_common.sh@10 -- # set +x 00:20:13.689 16:10:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:13.689 16:10:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:13.689 16:10:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:13.689 16:10:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:13.689 16:10:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:13.689 16:10:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:13.689 16:10:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:13.689 16:10:43 -- nvmf/common.sh@294 -- # net_devs=() 
00:20:13.689 16:10:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:13.689 16:10:43 -- nvmf/common.sh@295 -- # e810=() 00:20:13.689 16:10:43 -- nvmf/common.sh@295 -- # local -ga e810 00:20:13.689 16:10:43 -- nvmf/common.sh@296 -- # x722=() 00:20:13.689 16:10:43 -- nvmf/common.sh@296 -- # local -ga x722 00:20:13.689 16:10:43 -- nvmf/common.sh@297 -- # mlx=() 00:20:13.689 16:10:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:13.689 16:10:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:13.689 16:10:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:13.689 16:10:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:13.689 16:10:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:13.689 16:10:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:13.689 16:10:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:13.689 16:10:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:13.689 16:10:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:13.689 16:10:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:13.689 16:10:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:13.689 16:10:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:13.689 16:10:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:13.689 16:10:43 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:13.689 16:10:43 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:13.689 16:10:43 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:13.689 16:10:43 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:13.689 16:10:43 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:13.689 16:10:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:13.689 16:10:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:13.689 16:10:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:13.689 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:13.689 16:10:43 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:13.689 16:10:43 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:13.689 16:10:43 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:13.689 16:10:43 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:13.689 16:10:43 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:13.689 16:10:43 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:13.689 16:10:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:13.689 16:10:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:13.689 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:13.689 16:10:43 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:13.689 16:10:43 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:13.689 16:10:43 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:13.689 16:10:43 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:13.689 16:10:43 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:13.689 16:10:43 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:13.689 16:10:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:13.689 16:10:43 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:13.689 16:10:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:13.689 16:10:43 -- nvmf/common.sh@382 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.689 16:10:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:13.689 16:10:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.689 16:10:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:13.689 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:13.689 16:10:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.689 16:10:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:13.689 16:10:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:13.689 16:10:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:13.689 16:10:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:13.689 16:10:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:13.689 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:13.689 16:10:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:13.689 16:10:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:13.689 16:10:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:13.689 16:10:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:13.689 16:10:43 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:13.689 16:10:43 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:13.689 16:10:43 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:13.689 16:10:43 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:13.689 16:10:43 -- nvmf/common.sh@57 -- # uname 00:20:13.689 16:10:43 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:13.689 16:10:43 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:13.689 16:10:43 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:13.689 16:10:43 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:13.689 16:10:43 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:13.689 16:10:43 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:13.689 16:10:43 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:13.689 16:10:43 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:13.689 16:10:43 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:13.689 16:10:43 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:13.689 16:10:43 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:13.689 16:10:43 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:13.689 16:10:43 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:13.690 16:10:43 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:13.690 16:10:43 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:13.690 16:10:43 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:13.690 16:10:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:13.690 16:10:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.690 16:10:43 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:13.690 16:10:43 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:13.690 16:10:43 -- nvmf/common.sh@104 -- # continue 2 00:20:13.690 16:10:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:13.690 16:10:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.690 16:10:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:13.690 16:10:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.690 16:10:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:13.690 16:10:43 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:13.690 16:10:43 -- 
nvmf/common.sh@104 -- # continue 2 00:20:13.690 16:10:43 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:13.690 16:10:43 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:13.690 16:10:43 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:13.690 16:10:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:13.690 16:10:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:13.690 16:10:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:13.690 16:10:43 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:13.690 16:10:43 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:13.690 16:10:43 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:13.690 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:13.690 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:13.690 altname enp217s0f0np0 00:20:13.690 altname ens818f0np0 00:20:13.690 inet 192.168.100.8/24 scope global mlx_0_0 00:20:13.690 valid_lft forever preferred_lft forever 00:20:13.690 16:10:43 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:13.690 16:10:43 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:13.690 16:10:43 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:13.690 16:10:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:13.690 16:10:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:13.690 16:10:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:13.690 16:10:43 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:13.690 16:10:43 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:13.690 16:10:43 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:13.690 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:13.690 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:13.690 altname enp217s0f1np1 00:20:13.690 altname ens818f1np1 00:20:13.690 inet 192.168.100.9/24 scope global mlx_0_1 00:20:13.690 valid_lft forever preferred_lft forever 00:20:13.690 16:10:43 -- nvmf/common.sh@410 -- # return 0 00:20:13.690 16:10:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:13.690 16:10:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:13.690 16:10:43 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:13.690 16:10:43 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:13.690 16:10:43 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:13.690 16:10:43 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:13.690 16:10:43 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:13.690 16:10:43 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:13.690 16:10:43 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:13.690 16:10:43 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:13.690 16:10:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:13.690 16:10:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.690 16:10:43 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:13.690 16:10:43 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:13.690 16:10:43 -- nvmf/common.sh@104 -- # continue 2 00:20:13.690 16:10:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:13.690 16:10:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.690 16:10:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:13.690 16:10:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:13.690 16:10:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 
00:20:13.690 16:10:43 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:13.690 16:10:43 -- nvmf/common.sh@104 -- # continue 2 00:20:13.690 16:10:43 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:13.690 16:10:43 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:13.690 16:10:43 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:13.690 16:10:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:13.690 16:10:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:13.690 16:10:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:13.690 16:10:43 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:13.690 16:10:43 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:13.690 16:10:43 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:13.690 16:10:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:13.690 16:10:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:13.690 16:10:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:13.690 16:10:43 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:13.690 192.168.100.9' 00:20:13.690 16:10:43 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:13.690 192.168.100.9' 00:20:13.690 16:10:43 -- nvmf/common.sh@445 -- # head -n 1 00:20:13.690 16:10:43 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:13.690 16:10:43 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:13.690 192.168.100.9' 00:20:13.690 16:10:43 -- nvmf/common.sh@446 -- # tail -n +2 00:20:13.690 16:10:43 -- nvmf/common.sh@446 -- # head -n 1 00:20:13.690 16:10:43 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:13.690 16:10:43 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:13.690 16:10:43 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:13.690 16:10:43 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:13.690 16:10:43 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:13.690 16:10:43 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:13.690 16:10:43 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:20:13.690 16:10:43 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:13.690 16:10:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:13.690 16:10:43 -- common/autotest_common.sh@10 -- # set +x 00:20:13.690 16:10:43 -- nvmf/common.sh@469 -- # nvmfpid=1381271 00:20:13.690 16:10:43 -- nvmf/common.sh@470 -- # waitforlisten 1381271 00:20:13.690 16:10:43 -- common/autotest_common.sh@829 -- # '[' -z 1381271 ']' 00:20:13.690 16:10:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.690 16:10:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:13.690 16:10:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.690 16:10:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:13.690 16:10:43 -- common/autotest_common.sh@10 -- # set +x 00:20:13.690 16:10:43 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:13.690 [2024-11-20 16:10:43.995308] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:20:13.690 [2024-11-20 16:10:43.995360] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:13.690 EAL: No free 2048 kB hugepages reported on node 1 00:20:13.690 [2024-11-20 16:10:44.065752] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:13.690 [2024-11-20 16:10:44.103564] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:13.690 [2024-11-20 16:10:44.103671] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:13.691 [2024-11-20 16:10:44.103681] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:13.691 [2024-11-20 16:10:44.103689] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:13.691 [2024-11-20 16:10:44.103734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:13.691 [2024-11-20 16:10:44.103829] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:13.691 [2024-11-20 16:10:44.103919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:13.691 [2024-11-20 16:10:44.103921] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.260 16:10:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:14.260 16:10:44 -- common/autotest_common.sh@862 -- # return 0 00:20:14.260 16:10:44 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:14.260 16:10:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:14.260 16:10:44 -- common/autotest_common.sh@10 -- # set +x 00:20:14.260 16:10:44 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.260 16:10:44 -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:14.260 [2024-11-20 16:10:45.050048] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x16c10d0/0x16c55a0) succeed. 00:20:14.260 [2024-11-20 16:10:45.059252] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x16c2670/0x1706c40) succeed. 
00:20:14.519 16:10:45 -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:14.779 16:10:45 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:20:14.779 16:10:45 -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:15.038 16:10:45 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:20:15.038 16:10:45 -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:15.038 16:10:45 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:20:15.038 16:10:45 -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:15.298 16:10:46 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:20:15.298 16:10:46 -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:20:15.558 16:10:46 -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:15.817 16:10:46 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:20:15.817 16:10:46 -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:15.817 16:10:46 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:20:15.817 16:10:46 -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:16.076 16:10:46 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:20:16.076 16:10:46 -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:20:16.336 16:10:46 -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:16.595 16:10:47 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:16.595 16:10:47 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:16.595 16:10:47 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:20:16.595 16:10:47 -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:16.854 16:10:47 -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:17.114 [2024-11-20 16:10:47.734785] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:17.114 16:10:47 -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:20:17.374 16:10:47 -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:20:17.374 16:10:48 -- target/fio.sh@46 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:20:18.753 16:10:49 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:20:18.753 16:10:49 -- common/autotest_common.sh@1187 -- # local 
i=0 00:20:18.753 16:10:49 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:20:18.753 16:10:49 -- common/autotest_common.sh@1189 -- # [[ -n 4 ]] 00:20:18.753 16:10:49 -- common/autotest_common.sh@1190 -- # nvme_device_counter=4 00:20:18.753 16:10:49 -- common/autotest_common.sh@1194 -- # sleep 2 00:20:20.726 16:10:51 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:20:20.726 16:10:51 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:20:20.726 16:10:51 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:20:20.726 16:10:51 -- common/autotest_common.sh@1196 -- # nvme_devices=4 00:20:20.726 16:10:51 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:20:20.726 16:10:51 -- common/autotest_common.sh@1197 -- # return 0 00:20:20.726 16:10:51 -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:20:20.726 [global] 00:20:20.726 thread=1 00:20:20.726 invalidate=1 00:20:20.726 rw=write 00:20:20.726 time_based=1 00:20:20.726 runtime=1 00:20:20.726 ioengine=libaio 00:20:20.726 direct=1 00:20:20.726 bs=4096 00:20:20.726 iodepth=1 00:20:20.726 norandommap=0 00:20:20.726 numjobs=1 00:20:20.726 00:20:20.726 verify_dump=1 00:20:20.726 verify_backlog=512 00:20:20.726 verify_state_save=0 00:20:20.726 do_verify=1 00:20:20.726 verify=crc32c-intel 00:20:20.726 [job0] 00:20:20.726 filename=/dev/nvme0n1 00:20:20.726 [job1] 00:20:20.726 filename=/dev/nvme0n2 00:20:20.726 [job2] 00:20:20.726 filename=/dev/nvme0n3 00:20:20.726 [job3] 00:20:20.726 filename=/dev/nvme0n4 00:20:20.726 Could not set queue depth (nvme0n1) 00:20:20.726 Could not set queue depth (nvme0n2) 00:20:20.726 Could not set queue depth (nvme0n3) 00:20:20.726 Could not set queue depth (nvme0n4) 00:20:20.985 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:20.985 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:20.985 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:20.985 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:20.985 fio-3.35 00:20:20.985 Starting 4 threads 00:20:22.363 00:20:22.363 job0: (groupid=0, jobs=1): err= 0: pid=1382843: Wed Nov 20 16:10:52 2024 00:20:22.363 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:20:22.363 slat (nsec): min=8229, max=23791, avg=9006.62, stdev=864.28 00:20:22.363 clat (usec): min=66, max=169, avg=110.04, stdev=17.71 00:20:22.363 lat (usec): min=75, max=177, avg=119.05, stdev=17.77 00:20:22.363 clat percentiles (usec): 00:20:22.363 | 1.00th=[ 72], 5.00th=[ 76], 10.00th=[ 79], 20.00th=[ 98], 00:20:22.363 | 30.00th=[ 108], 40.00th=[ 112], 50.00th=[ 116], 60.00th=[ 118], 00:20:22.363 | 70.00th=[ 121], 80.00th=[ 124], 90.00th=[ 128], 95.00th=[ 133], 00:20:22.363 | 99.00th=[ 143], 99.50th=[ 153], 99.90th=[ 163], 99.95th=[ 165], 00:20:22.363 | 99.99th=[ 169] 00:20:22.363 write: IOPS=4184, BW=16.3MiB/s (17.1MB/s)(16.4MiB/1001msec); 0 zone resets 00:20:22.363 slat (nsec): min=10583, max=66434, avg=11446.47, stdev=1386.67 00:20:22.363 clat (usec): min=44, max=188, avg=106.02, stdev=19.79 00:20:22.363 lat (usec): min=73, max=199, avg=117.47, stdev=19.77 00:20:22.363 clat percentiles (usec): 00:20:22.363 | 1.00th=[ 68], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 80], 00:20:22.363 | 
30.00th=[ 103], 40.00th=[ 109], 50.00th=[ 113], 60.00th=[ 116], 00:20:22.363 | 70.00th=[ 119], 80.00th=[ 122], 90.00th=[ 126], 95.00th=[ 130], 00:20:22.363 | 99.00th=[ 141], 99.50th=[ 155], 99.90th=[ 169], 99.95th=[ 182], 00:20:22.363 | 99.99th=[ 188] 00:20:22.363 bw ( KiB/s): min=18928, max=18928, per=24.80%, avg=18928.00, stdev= 0.00, samples=1 00:20:22.363 iops : min= 4732, max= 4732, avg=4732.00, stdev= 0.00, samples=1 00:20:22.363 lat (usec) : 50=0.01%, 100=24.70%, 250=75.29% 00:20:22.363 cpu : usr=7.70%, sys=9.80%, ctx=8286, majf=0, minf=1 00:20:22.363 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:22.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.363 issued rwts: total=4096,4189,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.363 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:22.363 job1: (groupid=0, jobs=1): err= 0: pid=1382844: Wed Nov 20 16:10:52 2024 00:20:22.363 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:20:22.363 slat (nsec): min=8203, max=22378, avg=8729.12, stdev=690.69 00:20:22.363 clat (usec): min=63, max=164, avg=81.74, stdev=13.62 00:20:22.363 lat (usec): min=72, max=173, avg=90.47, stdev=13.73 00:20:22.363 clat percentiles (usec): 00:20:22.363 | 1.00th=[ 68], 5.00th=[ 71], 10.00th=[ 72], 20.00th=[ 74], 00:20:22.363 | 30.00th=[ 76], 40.00th=[ 77], 50.00th=[ 78], 60.00th=[ 80], 00:20:22.363 | 70.00th=[ 82], 80.00th=[ 85], 90.00th=[ 99], 95.00th=[ 119], 00:20:22.363 | 99.00th=[ 130], 99.50th=[ 135], 99.90th=[ 145], 99.95th=[ 163], 00:20:22.363 | 99.99th=[ 165] 00:20:22.363 write: IOPS=5624, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1001msec); 0 zone resets 00:20:22.363 slat (nsec): min=10561, max=42664, avg=11340.13, stdev=1137.61 00:20:22.363 clat (usec): min=57, max=170, avg=79.08, stdev=14.20 00:20:22.363 lat (usec): min=71, max=182, avg=90.42, stdev=14.30 00:20:22.363 clat percentiles (usec): 00:20:22.363 | 1.00th=[ 65], 5.00th=[ 68], 10.00th=[ 69], 20.00th=[ 71], 00:20:22.363 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 75], 60.00th=[ 77], 00:20:22.363 | 70.00th=[ 79], 80.00th=[ 82], 90.00th=[ 105], 95.00th=[ 115], 00:20:22.363 | 99.00th=[ 125], 99.50th=[ 130], 99.90th=[ 153], 99.95th=[ 157], 00:20:22.363 | 99.99th=[ 172] 00:20:22.363 bw ( KiB/s): min=20480, max=20480, per=26.83%, avg=20480.00, stdev= 0.00, samples=1 00:20:22.363 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:20:22.363 lat (usec) : 100=89.27%, 250=10.73% 00:20:22.363 cpu : usr=9.00%, sys=13.60%, ctx=10750, majf=0, minf=1 00:20:22.363 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:22.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.363 issued rwts: total=5120,5630,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.363 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:22.363 job2: (groupid=0, jobs=1): err= 0: pid=1382845: Wed Nov 20 16:10:52 2024 00:20:22.363 read: IOPS=3654, BW=14.3MiB/s (15.0MB/s)(14.3MiB/1001msec) 00:20:22.363 slat (nsec): min=8469, max=30543, avg=10358.67, stdev=1485.64 00:20:22.363 clat (usec): min=83, max=184, avg=115.19, stdev= 9.27 00:20:22.363 lat (usec): min=92, max=206, avg=125.54, stdev= 9.19 00:20:22.363 clat percentiles (usec): 00:20:22.363 | 1.00th=[ 96], 5.00th=[ 102], 10.00th=[ 105], 20.00th=[ 109], 00:20:22.363 | 
30.00th=[ 112], 40.00th=[ 114], 50.00th=[ 115], 60.00th=[ 117], 00:20:22.363 | 70.00th=[ 119], 80.00th=[ 122], 90.00th=[ 126], 95.00th=[ 130], 00:20:22.363 | 99.00th=[ 143], 99.50th=[ 153], 99.90th=[ 180], 99.95th=[ 184], 00:20:22.363 | 99.99th=[ 186] 00:20:22.363 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:20:22.363 slat (nsec): min=10411, max=48073, avg=12525.63, stdev=1898.17 00:20:22.363 clat (usec): min=73, max=164, avg=114.34, stdev= 9.09 00:20:22.363 lat (usec): min=85, max=176, avg=126.86, stdev= 9.17 00:20:22.363 clat percentiles (usec): 00:20:22.363 | 1.00th=[ 92], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 108], 00:20:22.363 | 30.00th=[ 111], 40.00th=[ 113], 50.00th=[ 115], 60.00th=[ 117], 00:20:22.363 | 70.00th=[ 119], 80.00th=[ 122], 90.00th=[ 125], 95.00th=[ 129], 00:20:22.363 | 99.00th=[ 141], 99.50th=[ 145], 99.90th=[ 157], 99.95th=[ 163], 00:20:22.363 | 99.99th=[ 165] 00:20:22.363 bw ( KiB/s): min=16384, max=16384, per=21.46%, avg=16384.00, stdev= 0.00, samples=1 00:20:22.363 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:20:22.363 lat (usec) : 100=4.17%, 250=95.83% 00:20:22.363 cpu : usr=7.50%, sys=11.80%, ctx=7754, majf=0, minf=1 00:20:22.363 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:22.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.363 issued rwts: total=3658,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.363 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:22.363 job3: (groupid=0, jobs=1): err= 0: pid=1382846: Wed Nov 20 16:10:52 2024 00:20:22.363 read: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec) 00:20:22.363 slat (nsec): min=8433, max=20052, avg=8988.71, stdev=809.47 00:20:22.363 clat (usec): min=71, max=389, avg=85.88, stdev= 7.34 00:20:22.363 lat (usec): min=80, max=398, avg=94.87, stdev= 7.40 00:20:22.363 clat percentiles (usec): 00:20:22.363 | 1.00th=[ 76], 5.00th=[ 78], 10.00th=[ 80], 20.00th=[ 82], 00:20:22.363 | 30.00th=[ 83], 40.00th=[ 84], 50.00th=[ 86], 60.00th=[ 87], 00:20:22.363 | 70.00th=[ 89], 80.00th=[ 91], 90.00th=[ 94], 95.00th=[ 97], 00:20:22.363 | 99.00th=[ 104], 99.50th=[ 106], 99.90th=[ 111], 99.95th=[ 114], 00:20:22.363 | 99.99th=[ 392] 00:20:22.363 write: IOPS=5182, BW=20.2MiB/s (21.2MB/s)(20.3MiB/1001msec); 0 zone resets 00:20:22.363 slat (nsec): min=10465, max=38742, avg=11522.00, stdev=1111.49 00:20:22.364 clat (usec): min=68, max=184, avg=82.43, stdev= 8.16 00:20:22.364 lat (usec): min=79, max=196, avg=93.95, stdev= 8.31 00:20:22.364 clat percentiles (usec): 00:20:22.364 | 1.00th=[ 72], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 78], 00:20:22.364 | 30.00th=[ 79], 40.00th=[ 81], 50.00th=[ 82], 60.00th=[ 83], 00:20:22.364 | 70.00th=[ 85], 80.00th=[ 87], 90.00th=[ 91], 95.00th=[ 94], 00:20:22.364 | 99.00th=[ 121], 99.50th=[ 135], 99.90th=[ 151], 99.95th=[ 159], 00:20:22.364 | 99.99th=[ 186] 00:20:22.364 bw ( KiB/s): min=20880, max=20880, per=27.35%, avg=20880.00, stdev= 0.00, samples=1 00:20:22.364 iops : min= 5220, max= 5220, avg=5220.00, stdev= 0.00, samples=1 00:20:22.364 lat (usec) : 100=97.80%, 250=2.19%, 500=0.01% 00:20:22.364 cpu : usr=7.90%, sys=14.00%, ctx=10308, majf=0, minf=1 00:20:22.364 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:22.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:22.364 issued rwts: total=5120,5188,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:22.364 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:22.364 00:20:22.364 Run status group 0 (all jobs): 00:20:22.364 READ: bw=70.2MiB/s (73.6MB/s), 14.3MiB/s-20.0MiB/s (15.0MB/s-20.9MB/s), io=70.3MiB (73.7MB), run=1001-1001msec 00:20:22.364 WRITE: bw=74.5MiB/s (78.2MB/s), 16.0MiB/s-22.0MiB/s (16.8MB/s-23.0MB/s), io=74.6MiB (78.2MB), run=1001-1001msec 00:20:22.364 00:20:22.364 Disk stats (read/write): 00:20:22.364 nvme0n1: ios=3473/3584, merge=0/0, ticks=341/329, in_queue=670, util=84.37% 00:20:22.364 nvme0n2: ios=4254/4608, merge=0/0, ticks=312/328, in_queue=640, util=85.38% 00:20:22.364 nvme0n3: ios=3072/3410, merge=0/0, ticks=319/361, in_queue=680, util=88.45% 00:20:22.364 nvme0n4: ios=4096/4532, merge=0/0, ticks=309/326, in_queue=635, util=89.50% 00:20:22.364 16:10:52 -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:20:22.364 [global] 00:20:22.364 thread=1 00:20:22.364 invalidate=1 00:20:22.364 rw=randwrite 00:20:22.364 time_based=1 00:20:22.364 runtime=1 00:20:22.364 ioengine=libaio 00:20:22.364 direct=1 00:20:22.364 bs=4096 00:20:22.364 iodepth=1 00:20:22.364 norandommap=0 00:20:22.364 numjobs=1 00:20:22.364 00:20:22.364 verify_dump=1 00:20:22.364 verify_backlog=512 00:20:22.364 verify_state_save=0 00:20:22.364 do_verify=1 00:20:22.364 verify=crc32c-intel 00:20:22.364 [job0] 00:20:22.364 filename=/dev/nvme0n1 00:20:22.364 [job1] 00:20:22.364 filename=/dev/nvme0n2 00:20:22.364 [job2] 00:20:22.364 filename=/dev/nvme0n3 00:20:22.364 [job3] 00:20:22.364 filename=/dev/nvme0n4 00:20:22.364 Could not set queue depth (nvme0n1) 00:20:22.364 Could not set queue depth (nvme0n2) 00:20:22.364 Could not set queue depth (nvme0n3) 00:20:22.364 Could not set queue depth (nvme0n4) 00:20:22.623 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:22.623 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:22.623 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:22.623 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:22.623 fio-3.35 00:20:22.623 Starting 4 threads 00:20:24.002 00:20:24.002 job0: (groupid=0, jobs=1): err= 0: pid=1383268: Wed Nov 20 16:10:54 2024 00:20:24.002 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:20:24.002 slat (nsec): min=8237, max=30522, avg=9166.17, stdev=1110.96 00:20:24.002 clat (usec): min=66, max=185, avg=108.37, stdev=12.51 00:20:24.002 lat (usec): min=75, max=194, avg=117.53, stdev=12.46 00:20:24.002 clat percentiles (usec): 00:20:24.002 | 1.00th=[ 77], 5.00th=[ 90], 10.00th=[ 95], 20.00th=[ 99], 00:20:24.002 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 108], 60.00th=[ 111], 00:20:24.002 | 70.00th=[ 115], 80.00th=[ 120], 90.00th=[ 125], 95.00th=[ 129], 00:20:24.002 | 99.00th=[ 141], 99.50th=[ 147], 99.90th=[ 165], 99.95th=[ 167], 00:20:24.002 | 99.99th=[ 186] 00:20:24.002 write: IOPS=4270, BW=16.7MiB/s (17.5MB/s)(16.7MiB/1001msec); 0 zone resets 00:20:24.002 slat (nsec): min=10353, max=44576, avg=11305.49, stdev=1367.55 00:20:24.002 clat (usec): min=62, max=184, avg=104.83, stdev=17.38 00:20:24.002 lat (usec): min=73, max=199, avg=116.14, stdev=17.40 00:20:24.002 clat percentiles (usec): 
00:20:24.002 | 1.00th=[ 69], 5.00th=[ 74], 10.00th=[ 84], 20.00th=[ 95], 00:20:24.002 | 30.00th=[ 99], 40.00th=[ 101], 50.00th=[ 104], 60.00th=[ 108], 00:20:24.002 | 70.00th=[ 111], 80.00th=[ 116], 90.00th=[ 126], 95.00th=[ 137], 00:20:24.002 | 99.00th=[ 159], 99.50th=[ 169], 99.90th=[ 184], 99.95th=[ 184], 00:20:24.002 | 99.99th=[ 186] 00:20:24.002 bw ( KiB/s): min=16384, max=16384, per=22.83%, avg=16384.00, stdev= 0.00, samples=1 00:20:24.002 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:20:24.002 lat (usec) : 100=29.54%, 250=70.46% 00:20:24.002 cpu : usr=7.30%, sys=11.00%, ctx=8371, majf=0, minf=1 00:20:24.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.002 issued rwts: total=4096,4275,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:24.002 job1: (groupid=0, jobs=1): err= 0: pid=1383269: Wed Nov 20 16:10:54 2024 00:20:24.002 read: IOPS=4886, BW=19.1MiB/s (20.0MB/s)(19.1MiB/1001msec) 00:20:24.002 slat (nsec): min=8227, max=26604, avg=9632.09, stdev=1390.85 00:20:24.002 clat (usec): min=65, max=131, avg=87.32, stdev=13.45 00:20:24.002 lat (usec): min=73, max=140, avg=96.95, stdev=13.23 00:20:24.002 clat percentiles (usec): 00:20:24.002 | 1.00th=[ 69], 5.00th=[ 72], 10.00th=[ 74], 20.00th=[ 76], 00:20:24.002 | 30.00th=[ 78], 40.00th=[ 80], 50.00th=[ 83], 60.00th=[ 89], 00:20:24.002 | 70.00th=[ 98], 80.00th=[ 103], 90.00th=[ 108], 95.00th=[ 111], 00:20:24.002 | 99.00th=[ 117], 99.50th=[ 118], 99.90th=[ 125], 99.95th=[ 130], 00:20:24.002 | 99.99th=[ 133] 00:20:24.002 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:20:24.002 slat (nsec): min=10382, max=40587, avg=11962.35, stdev=1634.60 00:20:24.002 clat (usec): min=52, max=144, avg=85.29, stdev=13.85 00:20:24.002 lat (usec): min=72, max=155, avg=97.26, stdev=13.59 00:20:24.002 clat percentiles (usec): 00:20:24.002 | 1.00th=[ 66], 5.00th=[ 69], 10.00th=[ 71], 20.00th=[ 73], 00:20:24.002 | 30.00th=[ 75], 40.00th=[ 78], 50.00th=[ 81], 60.00th=[ 88], 00:20:24.002 | 70.00th=[ 97], 80.00th=[ 101], 90.00th=[ 105], 95.00th=[ 109], 00:20:24.002 | 99.00th=[ 115], 99.50th=[ 117], 99.90th=[ 122], 99.95th=[ 130], 00:20:24.002 | 99.99th=[ 145] 00:20:24.002 bw ( KiB/s): min=23808, max=23808, per=33.17%, avg=23808.00, stdev= 0.00, samples=1 00:20:24.002 iops : min= 5952, max= 5952, avg=5952.00, stdev= 0.00, samples=1 00:20:24.002 lat (usec) : 100=76.89%, 250=23.11% 00:20:24.002 cpu : usr=8.80%, sys=14.20%, ctx=10011, majf=0, minf=1 00:20:24.002 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.002 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.002 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.002 issued rwts: total=4891,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.002 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:24.002 job2: (groupid=0, jobs=1): err= 0: pid=1383275: Wed Nov 20 16:10:54 2024 00:20:24.002 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:20:24.002 slat (nsec): min=8525, max=39099, avg=11042.44, stdev=3470.01 00:20:24.002 clat (usec): min=67, max=168, avg=102.16, stdev=17.24 00:20:24.003 lat (usec): min=77, max=179, avg=113.20, stdev=17.09 00:20:24.003 clat percentiles (usec): 00:20:24.003 
| 1.00th=[ 76], 5.00th=[ 81], 10.00th=[ 83], 20.00th=[ 86], 00:20:24.003 | 30.00th=[ 89], 40.00th=[ 92], 50.00th=[ 98], 60.00th=[ 111], 00:20:24.003 | 70.00th=[ 116], 80.00th=[ 120], 90.00th=[ 125], 95.00th=[ 129], 00:20:24.003 | 99.00th=[ 141], 99.50th=[ 149], 99.90th=[ 161], 99.95th=[ 165], 00:20:24.003 | 99.99th=[ 169] 00:20:24.003 write: IOPS=4467, BW=17.5MiB/s (18.3MB/s)(17.5MiB/1001msec); 0 zone resets 00:20:24.003 slat (nsec): min=10210, max=37620, avg=12802.19, stdev=3151.76 00:20:24.003 clat (usec): min=64, max=209, avg=101.79, stdev=19.84 00:20:24.003 lat (usec): min=76, max=220, avg=114.59, stdev=19.24 00:20:24.003 clat percentiles (usec): 00:20:24.003 | 1.00th=[ 73], 5.00th=[ 77], 10.00th=[ 80], 20.00th=[ 83], 00:20:24.003 | 30.00th=[ 86], 40.00th=[ 91], 50.00th=[ 104], 60.00th=[ 110], 00:20:24.003 | 70.00th=[ 114], 80.00th=[ 118], 90.00th=[ 126], 95.00th=[ 135], 00:20:24.003 | 99.00th=[ 159], 99.50th=[ 172], 99.90th=[ 182], 99.95th=[ 186], 00:20:24.003 | 99.99th=[ 210] 00:20:24.003 bw ( KiB/s): min=20480, max=20480, per=28.53%, avg=20480.00, stdev= 0.00, samples=1 00:20:24.003 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:20:24.003 lat (usec) : 100=49.65%, 250=50.35% 00:20:24.003 cpu : usr=7.00%, sys=11.00%, ctx=8568, majf=0, minf=1 00:20:24.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.003 issued rwts: total=4096,4472,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:24.003 job3: (groupid=0, jobs=1): err= 0: pid=1383277: Wed Nov 20 16:10:54 2024 00:20:24.003 read: IOPS=3677, BW=14.4MiB/s (15.1MB/s)(14.4MiB/1001msec) 00:20:24.003 slat (nsec): min=8444, max=29524, avg=9069.83, stdev=863.90 00:20:24.003 clat (usec): min=77, max=206, avg=118.06, stdev= 9.33 00:20:24.003 lat (usec): min=86, max=215, avg=127.13, stdev= 9.36 00:20:24.003 clat percentiles (usec): 00:20:24.003 | 1.00th=[ 97], 5.00th=[ 104], 10.00th=[ 109], 20.00th=[ 112], 00:20:24.003 | 30.00th=[ 115], 40.00th=[ 117], 50.00th=[ 118], 60.00th=[ 120], 00:20:24.003 | 70.00th=[ 122], 80.00th=[ 125], 90.00th=[ 129], 95.00th=[ 133], 00:20:24.003 | 99.00th=[ 149], 99.50th=[ 153], 99.90th=[ 167], 99.95th=[ 182], 00:20:24.003 | 99.99th=[ 206] 00:20:24.003 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:20:24.003 slat (nsec): min=10334, max=40575, avg=11294.95, stdev=936.54 00:20:24.003 clat (usec): min=66, max=175, avg=113.89, stdev=11.85 00:20:24.003 lat (usec): min=77, max=188, avg=125.18, stdev=11.91 00:20:24.003 clat percentiles (usec): 00:20:24.003 | 1.00th=[ 84], 5.00th=[ 97], 10.00th=[ 101], 20.00th=[ 106], 00:20:24.003 | 30.00th=[ 109], 40.00th=[ 111], 50.00th=[ 114], 60.00th=[ 116], 00:20:24.003 | 70.00th=[ 119], 80.00th=[ 122], 90.00th=[ 129], 95.00th=[ 135], 00:20:24.003 | 99.00th=[ 151], 99.50th=[ 155], 99.90th=[ 163], 99.95th=[ 169], 00:20:24.003 | 99.99th=[ 176] 00:20:24.003 bw ( KiB/s): min=16384, max=16384, per=22.83%, avg=16384.00, stdev= 0.00, samples=1 00:20:24.003 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:20:24.003 lat (usec) : 100=5.08%, 250=94.92% 00:20:24.003 cpu : usr=5.50%, sys=11.10%, ctx=7777, majf=0, minf=1 00:20:24.003 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:24.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:24.003 issued rwts: total=3681,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:24.003 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:24.003 00:20:24.003 Run status group 0 (all jobs): 00:20:24.003 READ: bw=65.4MiB/s (68.6MB/s), 14.4MiB/s-19.1MiB/s (15.1MB/s-20.0MB/s), io=65.5MiB (68.7MB), run=1001-1001msec 00:20:24.003 WRITE: bw=70.1MiB/s (73.5MB/s), 16.0MiB/s-20.0MiB/s (16.8MB/s-20.9MB/s), io=70.2MiB (73.6MB), run=1001-1001msec 00:20:24.003 00:20:24.003 Disk stats (read/write): 00:20:24.003 nvme0n1: ios=3129/3584, merge=0/0, ticks=315/345, in_queue=660, util=81.44% 00:20:24.003 nvme0n2: ios=4096/4208, merge=0/0, ticks=310/290, in_queue=600, util=82.84% 00:20:24.003 nvme0n3: ios=3468/3584, merge=0/0, ticks=327/325, in_queue=652, util=87.51% 00:20:24.003 nvme0n4: ios=3072/3189, merge=0/0, ticks=334/332, in_queue=666, util=89.16% 00:20:24.003 16:10:54 -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:20:24.003 [global] 00:20:24.003 thread=1 00:20:24.003 invalidate=1 00:20:24.003 rw=write 00:20:24.003 time_based=1 00:20:24.003 runtime=1 00:20:24.003 ioengine=libaio 00:20:24.003 direct=1 00:20:24.003 bs=4096 00:20:24.003 iodepth=128 00:20:24.003 norandommap=0 00:20:24.003 numjobs=1 00:20:24.003 00:20:24.003 verify_dump=1 00:20:24.003 verify_backlog=512 00:20:24.003 verify_state_save=0 00:20:24.003 do_verify=1 00:20:24.003 verify=crc32c-intel 00:20:24.003 [job0] 00:20:24.003 filename=/dev/nvme0n1 00:20:24.003 [job1] 00:20:24.003 filename=/dev/nvme0n2 00:20:24.003 [job2] 00:20:24.003 filename=/dev/nvme0n3 00:20:24.003 [job3] 00:20:24.003 filename=/dev/nvme0n4 00:20:24.003 Could not set queue depth (nvme0n1) 00:20:24.003 Could not set queue depth (nvme0n2) 00:20:24.003 Could not set queue depth (nvme0n3) 00:20:24.003 Could not set queue depth (nvme0n4) 00:20:24.261 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:24.261 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:24.261 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:24.261 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:24.261 fio-3.35 00:20:24.261 Starting 4 threads 00:20:25.641 00:20:25.641 job0: (groupid=0, jobs=1): err= 0: pid=1383702: Wed Nov 20 16:10:56 2024 00:20:25.641 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:20:25.641 slat (usec): min=2, max=798, avg=89.42, stdev=227.05 00:20:25.641 clat (usec): min=5345, max=13083, avg=11478.04, stdev=585.21 00:20:25.641 lat (usec): min=5349, max=13086, avg=11567.46, stdev=543.50 00:20:25.641 clat percentiles (usec): 00:20:25.641 | 1.00th=[10159], 5.00th=[10683], 10.00th=[10945], 20.00th=[11207], 00:20:25.641 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11600], 60.00th=[11600], 00:20:25.641 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:20:25.641 | 99.00th=[12256], 99.50th=[12387], 99.90th=[13042], 99.95th=[13042], 00:20:25.641 | 99.99th=[13042] 00:20:25.641 write: IOPS=5648, BW=22.1MiB/s (23.1MB/s)(22.1MiB/1003msec); 0 zone resets 00:20:25.641 slat (usec): min=2, max=1351, avg=85.20, stdev=217.10 00:20:25.641 clat (usec): min=1785, max=12018, avg=10972.09, stdev=731.52 00:20:25.641 lat 
(usec): min=2489, max=12200, avg=11057.29, stdev=700.10 00:20:25.641 clat percentiles (usec): 00:20:25.641 | 1.00th=[ 9896], 5.00th=[10290], 10.00th=[10421], 20.00th=[10683], 00:20:25.641 | 30.00th=[10814], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:20:25.641 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11731], 95.00th=[11863], 00:20:25.641 | 99.00th=[11994], 99.50th=[11994], 99.90th=[11994], 99.95th=[11994], 00:20:25.641 | 99.99th=[11994] 00:20:25.641 bw ( KiB/s): min=21032, max=23976, per=24.78%, avg=22504.00, stdev=2081.72, samples=2 00:20:25.641 iops : min= 5258, max= 5994, avg=5626.00, stdev=520.43, samples=2 00:20:25.641 lat (msec) : 2=0.01%, 4=0.18%, 10=1.43%, 20=98.39% 00:20:25.641 cpu : usr=1.60%, sys=4.09%, ctx=1645, majf=0, minf=1 00:20:25.641 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:25.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:25.641 issued rwts: total=5632,5665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.641 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:25.641 job1: (groupid=0, jobs=1): err= 0: pid=1383703: Wed Nov 20 16:10:56 2024 00:20:25.641 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:20:25.641 slat (usec): min=2, max=1550, avg=88.79, stdev=233.44 00:20:25.641 clat (usec): min=10266, max=14605, avg=11515.84, stdev=431.59 00:20:25.641 lat (usec): min=10296, max=14608, avg=11604.63, stdev=409.77 00:20:25.641 clat percentiles (usec): 00:20:25.641 | 1.00th=[10552], 5.00th=[10814], 10.00th=[11076], 20.00th=[11207], 00:20:25.641 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11600], 60.00th=[11600], 00:20:25.641 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:20:25.641 | 99.00th=[12387], 99.50th=[12911], 99.90th=[14615], 99.95th=[14615], 00:20:25.641 | 99.99th=[14615] 00:20:25.641 write: IOPS=5706, BW=22.3MiB/s (23.4MB/s)(22.4MiB/1004msec); 0 zone resets 00:20:25.641 slat (usec): min=2, max=1441, avg=84.87, stdev=222.80 00:20:25.641 clat (usec): min=3448, max=12691, avg=10886.44, stdev=748.58 00:20:25.641 lat (usec): min=4142, max=12695, avg=10971.31, stdev=739.36 00:20:25.641 clat percentiles (usec): 00:20:25.641 | 1.00th=[ 7570], 5.00th=[10028], 10.00th=[10290], 20.00th=[10552], 00:20:25.641 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:20:25.641 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11600], 95.00th=[11863], 00:20:25.641 | 99.00th=[11994], 99.50th=[11994], 99.90th=[12387], 99.95th=[12518], 00:20:25.641 | 99.99th=[12649] 00:20:25.641 bw ( KiB/s): min=20744, max=24312, per=24.80%, avg=22528.00, stdev=2522.96, samples=2 00:20:25.641 iops : min= 5186, max= 6078, avg=5632.00, stdev=630.74, samples=2 00:20:25.641 lat (msec) : 4=0.01%, 10=2.24%, 20=97.76% 00:20:25.641 cpu : usr=1.79%, sys=4.09%, ctx=1574, majf=0, minf=2 00:20:25.641 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:25.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:25.641 issued rwts: total=5632,5729,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.641 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:25.641 job2: (groupid=0, jobs=1): err= 0: pid=1383704: Wed Nov 20 16:10:56 2024 00:20:25.641 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:20:25.641 slat (usec): min=2, max=811, 
avg=89.37, stdev=226.56 00:20:25.641 clat (usec): min=5356, max=13095, avg=11477.67, stdev=587.40 00:20:25.641 lat (usec): min=5359, max=13099, avg=11567.03, stdev=548.04 00:20:25.641 clat percentiles (usec): 00:20:25.641 | 1.00th=[ 9896], 5.00th=[10683], 10.00th=[10945], 20.00th=[11207], 00:20:25.641 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11600], 60.00th=[11600], 00:20:25.641 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:20:25.641 | 99.00th=[12256], 99.50th=[12387], 99.90th=[13042], 99.95th=[13042], 00:20:25.641 | 99.99th=[13042] 00:20:25.641 write: IOPS=5648, BW=22.1MiB/s (23.1MB/s)(22.1MiB/1003msec); 0 zone resets 00:20:25.641 slat (usec): min=2, max=1329, avg=85.16, stdev=216.68 00:20:25.641 clat (usec): min=1798, max=12019, avg=10972.01, stdev=732.70 00:20:25.641 lat (usec): min=2500, max=12023, avg=11057.17, stdev=700.98 00:20:25.641 clat percentiles (usec): 00:20:25.641 | 1.00th=[ 9896], 5.00th=[10159], 10.00th=[10421], 20.00th=[10683], 00:20:25.641 | 30.00th=[10814], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:20:25.641 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11731], 95.00th=[11863], 00:20:25.641 | 99.00th=[11994], 99.50th=[11994], 99.90th=[11994], 99.95th=[11994], 00:20:25.641 | 99.99th=[11994] 00:20:25.641 bw ( KiB/s): min=21032, max=24024, per=24.80%, avg=22528.00, stdev=2115.66, samples=2 00:20:25.641 iops : min= 5258, max= 6006, avg=5632.00, stdev=528.92, samples=2 00:20:25.641 lat (msec) : 2=0.01%, 4=0.21%, 10=1.41%, 20=98.37% 00:20:25.641 cpu : usr=1.60%, sys=4.29%, ctx=1674, majf=0, minf=1 00:20:25.641 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:25.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:25.641 issued rwts: total=5632,5665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.641 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:25.641 job3: (groupid=0, jobs=1): err= 0: pid=1383705: Wed Nov 20 16:10:56 2024 00:20:25.641 read: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec) 00:20:25.641 slat (usec): min=2, max=1545, avg=89.38, stdev=232.39 00:20:25.641 clat (usec): min=10126, max=14609, avg=11513.71, stdev=427.95 00:20:25.641 lat (usec): min=10129, max=14613, avg=11603.09, stdev=408.50 00:20:25.641 clat percentiles (usec): 00:20:25.641 | 1.00th=[10552], 5.00th=[10814], 10.00th=[11076], 20.00th=[11207], 00:20:25.641 | 30.00th=[11338], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:20:25.641 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[12125], 00:20:25.641 | 99.00th=[12387], 99.50th=[12780], 99.90th=[14615], 99.95th=[14615], 00:20:25.641 | 99.99th=[14615] 00:20:25.641 write: IOPS=5714, BW=22.3MiB/s (23.4MB/s)(22.4MiB/1004msec); 0 zone resets 00:20:25.641 slat (usec): min=2, max=1430, avg=84.14, stdev=218.68 00:20:25.641 clat (usec): min=3450, max=13091, avg=10874.34, stdev=736.52 00:20:25.641 lat (usec): min=4152, max=13095, avg=10958.48, stdev=726.72 00:20:25.641 clat percentiles (usec): 00:20:25.641 | 1.00th=[ 7373], 5.00th=[10028], 10.00th=[10290], 20.00th=[10552], 00:20:25.641 | 30.00th=[10683], 40.00th=[10814], 50.00th=[10945], 60.00th=[11076], 00:20:25.641 | 70.00th=[11207], 80.00th=[11338], 90.00th=[11600], 95.00th=[11863], 00:20:25.641 | 99.00th=[11863], 99.50th=[11994], 99.90th=[12256], 99.95th=[12518], 00:20:25.641 | 99.99th=[13042] 00:20:25.641 bw ( KiB/s): min=20640, max=24416, per=24.80%, avg=22528.00, stdev=2670.04, 
samples=2 00:20:25.641 iops : min= 5160, max= 6104, avg=5632.00, stdev=667.51, samples=2 00:20:25.641 lat (msec) : 4=0.01%, 10=2.23%, 20=97.76% 00:20:25.641 cpu : usr=2.19%, sys=3.69%, ctx=1577, majf=0, minf=2 00:20:25.641 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:25.641 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:25.641 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:25.641 issued rwts: total=5632,5737,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:25.641 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:25.641 00:20:25.641 Run status group 0 (all jobs): 00:20:25.641 READ: bw=87.6MiB/s (91.9MB/s), 21.9MiB/s-21.9MiB/s (23.0MB/s-23.0MB/s), io=88.0MiB (92.3MB), run=1003-1004msec 00:20:25.641 WRITE: bw=88.7MiB/s (93.0MB/s), 22.1MiB/s-22.3MiB/s (23.1MB/s-23.4MB/s), io=89.0MiB (93.4MB), run=1003-1004msec 00:20:25.641 00:20:25.641 Disk stats (read/write): 00:20:25.641 nvme0n1: ios=4657/4826, merge=0/0, ticks=13423/13153, in_queue=26576, util=84.65% 00:20:25.641 nvme0n2: ios=4608/4866, merge=0/0, ticks=26250/25899, in_queue=52149, util=85.41% 00:20:25.642 nvme0n3: ios=4608/4832, merge=0/0, ticks=13393/13171, in_queue=26564, util=88.57% 00:20:25.642 nvme0n4: ios=4608/4862, merge=0/0, ticks=26203/25872, in_queue=52075, util=89.52% 00:20:25.642 16:10:56 -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:20:25.642 [global] 00:20:25.642 thread=1 00:20:25.642 invalidate=1 00:20:25.642 rw=randwrite 00:20:25.642 time_based=1 00:20:25.642 runtime=1 00:20:25.642 ioengine=libaio 00:20:25.642 direct=1 00:20:25.642 bs=4096 00:20:25.642 iodepth=128 00:20:25.642 norandommap=0 00:20:25.642 numjobs=1 00:20:25.642 00:20:25.642 verify_dump=1 00:20:25.642 verify_backlog=512 00:20:25.642 verify_state_save=0 00:20:25.642 do_verify=1 00:20:25.642 verify=crc32c-intel 00:20:25.642 [job0] 00:20:25.642 filename=/dev/nvme0n1 00:20:25.642 [job1] 00:20:25.642 filename=/dev/nvme0n2 00:20:25.642 [job2] 00:20:25.642 filename=/dev/nvme0n3 00:20:25.642 [job3] 00:20:25.642 filename=/dev/nvme0n4 00:20:25.642 Could not set queue depth (nvme0n1) 00:20:25.642 Could not set queue depth (nvme0n2) 00:20:25.642 Could not set queue depth (nvme0n3) 00:20:25.642 Could not set queue depth (nvme0n4) 00:20:25.900 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:25.900 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:25.900 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:25.900 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:20:25.900 fio-3.35 00:20:25.900 Starting 4 threads 00:20:27.277 00:20:27.277 job0: (groupid=0, jobs=1): err= 0: pid=1384129: Wed Nov 20 16:10:57 2024 00:20:27.277 read: IOPS=10.2k, BW=40.0MiB/s (41.9MB/s)(40.0MiB/1001msec) 00:20:27.277 slat (usec): min=2, max=863, avg=48.44, stdev=174.26 00:20:27.277 clat (usec): min=4719, max=7543, avg=6289.49, stdev=506.08 00:20:27.277 lat (usec): min=4722, max=7547, avg=6337.93, stdev=525.78 00:20:27.277 clat percentiles (usec): 00:20:27.277 | 1.00th=[ 4948], 5.00th=[ 5080], 10.00th=[ 5604], 20.00th=[ 6128], 00:20:27.277 | 30.00th=[ 6194], 40.00th=[ 6259], 50.00th=[ 6325], 60.00th=[ 6390], 00:20:27.277 | 70.00th=[ 6456], 80.00th=[ 
6587], 90.00th=[ 6915], 95.00th=[ 7046], 00:20:27.277 | 99.00th=[ 7308], 99.50th=[ 7308], 99.90th=[ 7439], 99.95th=[ 7504], 00:20:27.277 | 99.99th=[ 7504] 00:20:27.277 write: IOPS=10.6k, BW=41.2MiB/s (43.2MB/s)(41.3MiB/1001msec); 0 zone resets 00:20:27.277 slat (usec): min=2, max=1087, avg=45.35, stdev=162.04 00:20:27.277 clat (usec): min=499, max=7345, avg=5912.43, stdev=597.96 00:20:27.277 lat (usec): min=1128, max=7349, avg=5957.78, stdev=612.56 00:20:27.277 clat percentiles (usec): 00:20:27.277 | 1.00th=[ 4490], 5.00th=[ 4752], 10.00th=[ 4883], 20.00th=[ 5735], 00:20:27.277 | 30.00th=[ 5866], 40.00th=[ 5932], 50.00th=[ 5997], 60.00th=[ 6063], 00:20:27.277 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6587], 95.00th=[ 6718], 00:20:27.277 | 99.00th=[ 6915], 99.50th=[ 6980], 99.90th=[ 7177], 99.95th=[ 7242], 00:20:27.277 | 99.99th=[ 7308] 00:20:27.277 bw ( KiB/s): min=43736, max=43736, per=39.28%, avg=43736.00, stdev= 0.00, samples=1 00:20:27.277 iops : min=10934, max=10934, avg=10934.00, stdev= 0.00, samples=1 00:20:27.277 lat (usec) : 500=0.01% 00:20:27.277 lat (msec) : 2=0.15%, 4=0.15%, 10=99.69% 00:20:27.277 cpu : usr=3.50%, sys=5.80%, ctx=1344, majf=0, minf=1 00:20:27.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:20:27.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.277 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:27.277 issued rwts: total=10240,10561,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.277 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:27.277 job1: (groupid=0, jobs=1): err= 0: pid=1384130: Wed Nov 20 16:10:57 2024 00:20:27.277 read: IOPS=5298, BW=20.7MiB/s (21.7MB/s)(20.8MiB/1003msec) 00:20:27.277 slat (usec): min=2, max=3728, avg=91.61, stdev=396.68 00:20:27.277 clat (usec): min=2887, max=16965, avg=11757.78, stdev=1205.22 00:20:27.277 lat (usec): min=3685, max=16968, avg=11849.39, stdev=1248.63 00:20:27.277 clat percentiles (usec): 00:20:27.277 | 1.00th=[ 7898], 5.00th=[10945], 10.00th=[10945], 20.00th=[11076], 00:20:27.277 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11600], 00:20:27.277 | 70.00th=[11863], 80.00th=[12256], 90.00th=[13304], 95.00th=[14091], 00:20:27.277 | 99.00th=[15664], 99.50th=[16319], 99.90th=[16909], 99.95th=[16909], 00:20:27.277 | 99.99th=[16909] 00:20:27.277 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:20:27.277 slat (usec): min=2, max=3742, avg=88.31, stdev=372.10 00:20:27.277 clat (usec): min=8292, max=16713, avg=11457.77, stdev=902.66 00:20:27.277 lat (usec): min=8302, max=16716, avg=11546.08, stdev=956.78 00:20:27.277 clat percentiles (usec): 00:20:27.277 | 1.00th=[10290], 5.00th=[10683], 10.00th=[10683], 20.00th=[10814], 00:20:27.277 | 30.00th=[10945], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:20:27.277 | 70.00th=[11600], 80.00th=[11994], 90.00th=[12518], 95.00th=[13698], 00:20:27.277 | 99.00th=[14353], 99.50th=[14615], 99.90th=[16057], 99.95th=[16712], 00:20:27.277 | 99.99th=[16712] 00:20:27.277 bw ( KiB/s): min=22016, max=23040, per=20.23%, avg=22528.00, stdev=724.08, samples=2 00:20:27.277 iops : min= 5504, max= 5760, avg=5632.00, stdev=181.02, samples=2 00:20:27.277 lat (msec) : 4=0.16%, 10=0.83%, 20=99.00% 00:20:27.277 cpu : usr=1.60%, sys=3.99%, ctx=786, majf=0, minf=1 00:20:27.277 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:20:27.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.277 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:27.277 issued rwts: total=5314,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.277 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:27.277 job2: (groupid=0, jobs=1): err= 0: pid=1384131: Wed Nov 20 16:10:57 2024 00:20:27.278 read: IOPS=7664, BW=29.9MiB/s (31.4MB/s)(30.0MiB/1002msec) 00:20:27.278 slat (usec): min=2, max=2294, avg=63.49, stdev=224.93 00:20:27.278 clat (usec): min=4746, max=22229, avg=8078.90, stdev=1872.03 00:20:27.278 lat (usec): min=4754, max=22241, avg=8142.39, stdev=1886.47 00:20:27.278 clat percentiles (usec): 00:20:27.278 | 1.00th=[ 6849], 5.00th=[ 7111], 10.00th=[ 7177], 20.00th=[ 7308], 00:20:27.278 | 30.00th=[ 7439], 40.00th=[ 7570], 50.00th=[ 7701], 60.00th=[ 8029], 00:20:27.278 | 70.00th=[ 8160], 80.00th=[ 8291], 90.00th=[ 8455], 95.00th=[ 8586], 00:20:27.278 | 99.00th=[19792], 99.50th=[19792], 99.90th=[20579], 99.95th=[21890], 00:20:27.278 | 99.99th=[22152] 00:20:27.278 write: IOPS=7906, BW=30.9MiB/s (32.4MB/s)(30.9MiB/1002msec); 0 zone resets 00:20:27.278 slat (usec): min=2, max=2308, avg=62.33, stdev=219.49 00:20:27.278 clat (usec): min=452, max=21782, avg=8140.11, stdev=2899.81 00:20:27.278 lat (usec): min=1330, max=21791, avg=8202.44, stdev=2920.75 00:20:27.278 clat percentiles (usec): 00:20:27.278 | 1.00th=[ 5669], 5.00th=[ 6718], 10.00th=[ 6849], 20.00th=[ 6980], 00:20:27.278 | 30.00th=[ 7111], 40.00th=[ 7242], 50.00th=[ 7439], 60.00th=[ 7701], 00:20:27.278 | 70.00th=[ 7832], 80.00th=[ 8029], 90.00th=[ 8225], 95.00th=[18744], 00:20:27.278 | 99.00th=[19530], 99.50th=[19792], 99.90th=[21103], 99.95th=[21103], 00:20:27.278 | 99.99th=[21890] 00:20:27.278 bw ( KiB/s): min=28672, max=28672, per=25.75%, avg=28672.00, stdev= 0.00, samples=1 00:20:27.278 iops : min= 7168, max= 7168, avg=7168.00, stdev= 0.00, samples=1 00:20:27.278 lat (usec) : 500=0.01% 00:20:27.278 lat (msec) : 2=0.08%, 4=0.22%, 10=94.82%, 20=4.61%, 50=0.26% 00:20:27.278 cpu : usr=2.30%, sys=4.50%, ctx=1337, majf=0, minf=1 00:20:27.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:20:27.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:27.278 issued rwts: total=7680,7922,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:27.278 job3: (groupid=0, jobs=1): err= 0: pid=1384132: Wed Nov 20 16:10:57 2024 00:20:27.278 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:20:27.278 slat (usec): min=2, max=4458, avg=134.61, stdev=547.91 00:20:27.278 clat (usec): min=15306, max=23157, avg=17401.07, stdev=1204.20 00:20:27.278 lat (usec): min=15332, max=23161, avg=17535.68, stdev=1292.45 00:20:27.278 clat percentiles (usec): 00:20:27.278 | 1.00th=[15664], 5.00th=[16057], 10.00th=[16319], 20.00th=[16581], 00:20:27.278 | 30.00th=[16909], 40.00th=[16909], 50.00th=[17171], 60.00th=[17433], 00:20:27.278 | 70.00th=[17433], 80.00th=[17695], 90.00th=[19792], 95.00th=[20317], 00:20:27.278 | 99.00th=[21103], 99.50th=[21627], 99.90th=[22938], 99.95th=[23200], 00:20:27.278 | 99.99th=[23200] 00:20:27.278 write: IOPS=3817, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1004msec); 0 zone resets 00:20:27.278 slat (usec): min=2, max=4413, avg=131.74, stdev=529.35 00:20:27.278 clat (usec): min=1526, max=21720, avg=16806.22, stdev=1783.27 00:20:27.278 lat (usec): min=4684, max=21724, avg=16937.96, stdev=1838.28 00:20:27.278 clat 
percentiles (usec): 00:20:27.278 | 1.00th=[ 6259], 5.00th=[15533], 10.00th=[15926], 20.00th=[16057], 00:20:27.278 | 30.00th=[16319], 40.00th=[16450], 50.00th=[16712], 60.00th=[16909], 00:20:27.278 | 70.00th=[17171], 80.00th=[17433], 90.00th=[19006], 95.00th=[19530], 00:20:27.278 | 99.00th=[20579], 99.50th=[20841], 99.90th=[21103], 99.95th=[21365], 00:20:27.278 | 99.99th=[21627] 00:20:27.278 bw ( KiB/s): min=13440, max=16208, per=13.31%, avg=14824.00, stdev=1957.27, samples=2 00:20:27.278 iops : min= 3360, max= 4052, avg=3706.00, stdev=489.32, samples=2 00:20:27.278 lat (msec) : 2=0.01%, 10=0.57%, 20=94.93%, 50=4.49% 00:20:27.278 cpu : usr=1.69%, sys=3.29%, ctx=670, majf=0, minf=1 00:20:27.278 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:20:27.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:27.278 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:27.278 issued rwts: total=3584,3833,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:27.278 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:27.278 00:20:27.278 Run status group 0 (all jobs): 00:20:27.278 READ: bw=104MiB/s (109MB/s), 13.9MiB/s-40.0MiB/s (14.6MB/s-41.9MB/s), io=105MiB (110MB), run=1001-1004msec 00:20:27.278 WRITE: bw=109MiB/s (114MB/s), 14.9MiB/s-41.2MiB/s (15.6MB/s-43.2MB/s), io=109MiB (114MB), run=1001-1004msec 00:20:27.278 00:20:27.278 Disk stats (read/write): 00:20:27.278 nvme0n1: ios=8753/8789, merge=0/0, ticks=13607/12772, in_queue=26379, util=84.65% 00:20:27.278 nvme0n2: ios=4468/4608, merge=0/0, ticks=25840/25884, in_queue=51724, util=85.52% 00:20:27.278 nvme0n3: ios=6200/6656, merge=0/0, ticks=13204/14427, in_queue=27631, util=88.60% 00:20:27.278 nvme0n4: ios=3070/3072, merge=0/0, ticks=17533/16971, in_queue=34504, util=89.54% 00:20:27.278 16:10:57 -- target/fio.sh@55 -- # sync 00:20:27.278 16:10:57 -- target/fio.sh@59 -- # fio_pid=1384400 00:20:27.278 16:10:57 -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:20:27.278 16:10:57 -- target/fio.sh@61 -- # sleep 3 00:20:27.278 [global] 00:20:27.278 thread=1 00:20:27.278 invalidate=1 00:20:27.278 rw=read 00:20:27.278 time_based=1 00:20:27.278 runtime=10 00:20:27.278 ioengine=libaio 00:20:27.278 direct=1 00:20:27.278 bs=4096 00:20:27.278 iodepth=1 00:20:27.278 norandommap=1 00:20:27.278 numjobs=1 00:20:27.278 00:20:27.278 [job0] 00:20:27.278 filename=/dev/nvme0n1 00:20:27.278 [job1] 00:20:27.278 filename=/dev/nvme0n2 00:20:27.278 [job2] 00:20:27.278 filename=/dev/nvme0n3 00:20:27.278 [job3] 00:20:27.278 filename=/dev/nvme0n4 00:20:27.278 Could not set queue depth (nvme0n1) 00:20:27.278 Could not set queue depth (nvme0n2) 00:20:27.278 Could not set queue depth (nvme0n3) 00:20:27.278 Could not set queue depth (nvme0n4) 00:20:27.278 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:27.278 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:27.278 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:27.278 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:20:27.278 fio-3.35 00:20:27.278 Starting 4 threads 00:20:30.565 16:11:00 -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:20:30.565 fio: io_u error on file 
/dev/nvme0n4: Operation not supported: read offset=103383040, buflen=4096 00:20:30.565 fio: pid=1384561, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:30.565 16:11:00 -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:20:30.565 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=93089792, buflen=4096 00:20:30.565 fio: pid=1384560, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:30.565 16:11:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:30.565 16:11:01 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:20:30.565 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=61493248, buflen=4096 00:20:30.565 fio: pid=1384556, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:30.565 16:11:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:30.565 16:11:01 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:20:30.826 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=44122112, buflen=4096 00:20:30.826 fio: pid=1384557, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:20:30.826 16:11:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:30.826 16:11:01 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:20:30.826 00:20:30.826 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1384556: Wed Nov 20 16:11:01 2024 00:20:30.826 read: IOPS=10.4k, BW=40.5MiB/s (42.5MB/s)(123MiB/3027msec) 00:20:30.826 slat (usec): min=7, max=14284, avg= 9.99, stdev=123.62 00:20:30.826 clat (usec): min=47, max=21798, avg=84.22, stdev=122.93 00:20:30.826 lat (usec): min=55, max=21807, avg=94.21, stdev=174.32 00:20:30.826 clat percentiles (usec): 00:20:30.826 | 1.00th=[ 59], 5.00th=[ 74], 10.00th=[ 76], 20.00th=[ 78], 00:20:30.826 | 30.00th=[ 80], 40.00th=[ 81], 50.00th=[ 83], 60.00th=[ 84], 00:20:30.826 | 70.00th=[ 86], 80.00th=[ 89], 90.00th=[ 94], 95.00th=[ 100], 00:20:30.826 | 99.00th=[ 117], 99.50th=[ 135], 99.90th=[ 155], 99.95th=[ 174], 00:20:30.826 | 99.99th=[ 190] 00:20:30.826 bw ( KiB/s): min=41336, max=43032, per=32.45%, avg=42626.00, stdev=728.20, samples=5 00:20:30.826 iops : min=10334, max=10758, avg=10656.40, stdev=182.02, samples=5 00:20:30.826 lat (usec) : 50=0.08%, 100=95.03%, 250=4.88%, 500=0.01% 00:20:30.826 lat (msec) : 50=0.01% 00:20:30.826 cpu : usr=5.16%, sys=14.01%, ctx=31403, majf=0, minf=2 00:20:30.826 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.826 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.826 issued rwts: total=31398,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.826 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:30.826 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1384557: Wed Nov 20 16:11:01 2024 00:20:30.826 read: IOPS=8371, BW=32.7MiB/s (34.3MB/s)(106MiB/3244msec) 00:20:30.826 slat (usec): min=7, max=16750, 
avg=11.98, stdev=207.88 00:20:30.826 clat (usec): min=39, max=21898, avg=105.13, stdev=188.70 00:20:30.826 lat (usec): min=56, max=21906, avg=117.11, stdev=280.69 00:20:30.826 clat percentiles (usec): 00:20:30.826 | 1.00th=[ 55], 5.00th=[ 59], 10.00th=[ 68], 20.00th=[ 76], 00:20:30.826 | 30.00th=[ 81], 40.00th=[ 98], 50.00th=[ 113], 60.00th=[ 120], 00:20:30.826 | 70.00th=[ 125], 80.00th=[ 128], 90.00th=[ 133], 95.00th=[ 137], 00:20:30.826 | 99.00th=[ 155], 99.50th=[ 167], 99.90th=[ 182], 99.95th=[ 188], 00:20:30.826 | 99.99th=[ 229] 00:20:30.826 bw ( KiB/s): min=29952, max=36960, per=24.94%, avg=32755.00, stdev=2564.67, samples=6 00:20:30.826 iops : min= 7488, max= 9240, avg=8188.67, stdev=641.21, samples=6 00:20:30.826 lat (usec) : 50=0.09%, 100=40.96%, 250=58.94% 00:20:30.826 lat (msec) : 50=0.01% 00:20:30.826 cpu : usr=4.01%, sys=11.35%, ctx=27165, majf=0, minf=2 00:20:30.826 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.826 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.826 issued rwts: total=27157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.826 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:30.826 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1384560: Wed Nov 20 16:11:01 2024 00:20:30.826 read: IOPS=8019, BW=31.3MiB/s (32.8MB/s)(88.8MiB/2834msec) 00:20:30.826 slat (usec): min=7, max=13695, avg=11.33, stdev=111.97 00:20:30.826 clat (usec): min=64, max=8629, avg=110.79, stdev=59.73 00:20:30.826 lat (usec): min=76, max=13799, avg=122.11, stdev=126.89 00:20:30.826 clat percentiles (usec): 00:20:30.826 | 1.00th=[ 77], 5.00th=[ 81], 10.00th=[ 84], 20.00th=[ 89], 00:20:30.826 | 30.00th=[ 96], 40.00th=[ 108], 50.00th=[ 116], 60.00th=[ 120], 00:20:30.826 | 70.00th=[ 124], 80.00th=[ 128], 90.00th=[ 133], 95.00th=[ 137], 00:20:30.826 | 99.00th=[ 153], 99.50th=[ 163], 99.90th=[ 176], 99.95th=[ 180], 00:20:30.826 | 99.99th=[ 208] 00:20:30.826 bw ( KiB/s): min=29668, max=35536, per=24.79%, avg=32559.20, stdev=2661.81, samples=5 00:20:30.826 iops : min= 7417, max= 8884, avg=8139.80, stdev=665.45, samples=5 00:20:30.826 lat (usec) : 100=33.07%, 250=66.91%, 750=0.01% 00:20:30.826 lat (msec) : 10=0.01% 00:20:30.826 cpu : usr=3.71%, sys=12.00%, ctx=22731, majf=0, minf=1 00:20:30.826 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.826 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.826 issued rwts: total=22728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.826 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:30.826 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1384561: Wed Nov 20 16:11:01 2024 00:20:30.826 read: IOPS=9619, BW=37.6MiB/s (39.4MB/s)(98.6MiB/2624msec) 00:20:30.826 slat (nsec): min=8124, max=35471, avg=8847.82, stdev=970.61 00:20:30.826 clat (usec): min=70, max=202, avg=93.31, stdev=15.91 00:20:30.826 lat (usec): min=79, max=211, avg=102.16, stdev=16.05 00:20:30.826 clat percentiles (usec): 00:20:30.826 | 1.00th=[ 77], 5.00th=[ 80], 10.00th=[ 81], 20.00th=[ 83], 00:20:30.826 | 30.00th=[ 85], 40.00th=[ 87], 50.00th=[ 88], 60.00th=[ 91], 00:20:30.826 | 70.00th=[ 93], 80.00th=[ 99], 90.00th=[ 122], 95.00th=[ 130], 00:20:30.826 | 99.00th=[ 
145], 99.50th=[ 155], 99.90th=[ 172], 99.95th=[ 176], 00:20:30.826 | 99.99th=[ 188] 00:20:30.826 bw ( KiB/s): min=33008, max=40968, per=29.35%, avg=38550.20, stdev=3196.13, samples=5 00:20:30.826 iops : min= 8252, max=10242, avg=9637.40, stdev=798.94, samples=5 00:20:30.826 lat (usec) : 100=81.10%, 250=18.90% 00:20:30.826 cpu : usr=3.85%, sys=14.07%, ctx=25241, majf=0, minf=2 00:20:30.826 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:30.826 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.826 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:30.826 issued rwts: total=25241,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:30.826 latency : target=0, window=0, percentile=100.00%, depth=1 00:20:30.826 00:20:30.826 Run status group 0 (all jobs): 00:20:30.826 READ: bw=128MiB/s (134MB/s), 31.3MiB/s-40.5MiB/s (32.8MB/s-42.5MB/s), io=416MiB (436MB), run=2624-3244msec 00:20:30.826 00:20:30.826 Disk stats (read/write): 00:20:30.826 nvme0n1: ios=29590/0, merge=0/0, ticks=2260/0, in_queue=2260, util=94.05% 00:20:30.826 nvme0n2: ios=25212/0, merge=0/0, ticks=2507/0, in_queue=2507, util=93.18% 00:20:30.826 nvme0n3: ios=20960/0, merge=0/0, ticks=2148/0, in_queue=2148, util=96.03% 00:20:30.826 nvme0n4: ios=24939/0, merge=0/0, ticks=2075/0, in_queue=2075, util=96.46% 00:20:31.086 16:11:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:31.086 16:11:01 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:20:31.345 16:11:01 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:31.345 16:11:01 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:20:31.345 16:11:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:31.345 16:11:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:20:31.604 16:11:02 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:20:31.604 16:11:02 -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:20:31.864 16:11:02 -- target/fio.sh@69 -- # fio_status=0 00:20:31.864 16:11:02 -- target/fio.sh@70 -- # wait 1384400 00:20:31.864 16:11:02 -- target/fio.sh@70 -- # fio_status=4 00:20:31.864 16:11:02 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:32.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:32.801 16:11:03 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:32.801 16:11:03 -- common/autotest_common.sh@1208 -- # local i=0 00:20:32.801 16:11:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:20:32.801 16:11:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:32.801 16:11:03 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:20:32.801 16:11:03 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:32.801 16:11:03 -- common/autotest_common.sh@1220 -- # return 0 00:20:32.801 16:11:03 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:20:32.801 16:11:03 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:20:32.801 nvmf hotplug test: fio failed as expected 00:20:32.801 16:11:03 -- 
target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:33.061 16:11:03 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:20:33.061 16:11:03 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:20:33.061 16:11:03 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:20:33.061 16:11:03 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:20:33.061 16:11:03 -- target/fio.sh@91 -- # nvmftestfini 00:20:33.061 16:11:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:33.061 16:11:03 -- nvmf/common.sh@116 -- # sync 00:20:33.062 16:11:03 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:33.062 16:11:03 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:33.062 16:11:03 -- nvmf/common.sh@119 -- # set +e 00:20:33.062 16:11:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:33.062 16:11:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:33.062 rmmod nvme_rdma 00:20:33.062 rmmod nvme_fabrics 00:20:33.062 16:11:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:33.062 16:11:03 -- nvmf/common.sh@123 -- # set -e 00:20:33.062 16:11:03 -- nvmf/common.sh@124 -- # return 0 00:20:33.062 16:11:03 -- nvmf/common.sh@477 -- # '[' -n 1381271 ']' 00:20:33.062 16:11:03 -- nvmf/common.sh@478 -- # killprocess 1381271 00:20:33.062 16:11:03 -- common/autotest_common.sh@936 -- # '[' -z 1381271 ']' 00:20:33.062 16:11:03 -- common/autotest_common.sh@940 -- # kill -0 1381271 00:20:33.062 16:11:03 -- common/autotest_common.sh@941 -- # uname 00:20:33.062 16:11:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:33.062 16:11:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1381271 00:20:33.062 16:11:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:33.062 16:11:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:33.062 16:11:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1381271' 00:20:33.062 killing process with pid 1381271 00:20:33.062 16:11:03 -- common/autotest_common.sh@955 -- # kill 1381271 00:20:33.062 16:11:03 -- common/autotest_common.sh@960 -- # wait 1381271 00:20:33.321 16:11:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:33.321 16:11:04 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:33.321 00:20:33.321 real 0m26.523s 00:20:33.321 user 2m8.314s 00:20:33.321 sys 0m10.023s 00:20:33.321 16:11:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:33.321 16:11:04 -- common/autotest_common.sh@10 -- # set +x 00:20:33.321 ************************************ 00:20:33.321 END TEST nvmf_fio_target 00:20:33.321 ************************************ 00:20:33.321 16:11:04 -- nvmf/nvmf.sh@55 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:20:33.321 16:11:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:33.321 16:11:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:33.321 16:11:04 -- common/autotest_common.sh@10 -- # set +x 00:20:33.582 ************************************ 00:20:33.582 START TEST nvmf_bdevio 00:20:33.582 ************************************ 00:20:33.582 16:11:04 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=rdma 00:20:33.582 * Looking for test storage... 
00:20:33.582 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:33.582 16:11:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:33.582 16:11:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:33.582 16:11:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:33.582 16:11:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:33.582 16:11:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:33.582 16:11:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:33.582 16:11:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:33.582 16:11:04 -- scripts/common.sh@335 -- # IFS=.-: 00:20:33.582 16:11:04 -- scripts/common.sh@335 -- # read -ra ver1 00:20:33.582 16:11:04 -- scripts/common.sh@336 -- # IFS=.-: 00:20:33.582 16:11:04 -- scripts/common.sh@336 -- # read -ra ver2 00:20:33.582 16:11:04 -- scripts/common.sh@337 -- # local 'op=<' 00:20:33.582 16:11:04 -- scripts/common.sh@339 -- # ver1_l=2 00:20:33.582 16:11:04 -- scripts/common.sh@340 -- # ver2_l=1 00:20:33.582 16:11:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:33.582 16:11:04 -- scripts/common.sh@343 -- # case "$op" in 00:20:33.582 16:11:04 -- scripts/common.sh@344 -- # : 1 00:20:33.582 16:11:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:33.582 16:11:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:33.582 16:11:04 -- scripts/common.sh@364 -- # decimal 1 00:20:33.582 16:11:04 -- scripts/common.sh@352 -- # local d=1 00:20:33.582 16:11:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:33.582 16:11:04 -- scripts/common.sh@354 -- # echo 1 00:20:33.582 16:11:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:33.582 16:11:04 -- scripts/common.sh@365 -- # decimal 2 00:20:33.582 16:11:04 -- scripts/common.sh@352 -- # local d=2 00:20:33.582 16:11:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:33.582 16:11:04 -- scripts/common.sh@354 -- # echo 2 00:20:33.582 16:11:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:33.582 16:11:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:33.582 16:11:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:33.582 16:11:04 -- scripts/common.sh@367 -- # return 0 00:20:33.582 16:11:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:33.582 16:11:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:33.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.582 --rc genhtml_branch_coverage=1 00:20:33.582 --rc genhtml_function_coverage=1 00:20:33.582 --rc genhtml_legend=1 00:20:33.582 --rc geninfo_all_blocks=1 00:20:33.582 --rc geninfo_unexecuted_blocks=1 00:20:33.582 00:20:33.582 ' 00:20:33.582 16:11:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:33.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.582 --rc genhtml_branch_coverage=1 00:20:33.582 --rc genhtml_function_coverage=1 00:20:33.582 --rc genhtml_legend=1 00:20:33.582 --rc geninfo_all_blocks=1 00:20:33.582 --rc geninfo_unexecuted_blocks=1 00:20:33.582 00:20:33.582 ' 00:20:33.582 16:11:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:33.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.582 --rc genhtml_branch_coverage=1 00:20:33.582 --rc genhtml_function_coverage=1 00:20:33.582 --rc genhtml_legend=1 00:20:33.582 --rc geninfo_all_blocks=1 00:20:33.582 --rc geninfo_unexecuted_blocks=1 00:20:33.582 00:20:33.582 ' 
00:20:33.582 16:11:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:33.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:33.582 --rc genhtml_branch_coverage=1 00:20:33.582 --rc genhtml_function_coverage=1 00:20:33.582 --rc genhtml_legend=1 00:20:33.582 --rc geninfo_all_blocks=1 00:20:33.582 --rc geninfo_unexecuted_blocks=1 00:20:33.582 00:20:33.582 ' 00:20:33.582 16:11:04 -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:33.582 16:11:04 -- nvmf/common.sh@7 -- # uname -s 00:20:33.582 16:11:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:33.582 16:11:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:33.582 16:11:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:33.582 16:11:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:33.582 16:11:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:33.582 16:11:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:33.582 16:11:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:33.582 16:11:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:33.582 16:11:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:33.582 16:11:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:33.582 16:11:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:33.583 16:11:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:33.583 16:11:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:33.583 16:11:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:33.583 16:11:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:33.583 16:11:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:33.583 16:11:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:33.583 16:11:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:33.583 16:11:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:33.583 16:11:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.583 16:11:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.583 16:11:04 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.583 16:11:04 -- paths/export.sh@5 -- # export PATH 00:20:33.583 16:11:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:33.583 16:11:04 -- nvmf/common.sh@46 -- # : 0 00:20:33.583 16:11:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:33.583 16:11:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:33.583 16:11:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:33.583 16:11:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:33.583 16:11:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:33.583 16:11:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:33.583 16:11:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:33.583 16:11:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:33.583 16:11:04 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:33.583 16:11:04 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:33.583 16:11:04 -- target/bdevio.sh@14 -- # nvmftestinit 00:20:33.583 16:11:04 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:33.583 16:11:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:33.583 16:11:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:33.583 16:11:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:33.583 16:11:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:33.583 16:11:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:33.583 16:11:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:33.583 16:11:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:33.583 16:11:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:33.583 16:11:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:33.583 16:11:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:33.583 16:11:04 -- common/autotest_common.sh@10 -- # set +x 00:20:40.153 16:11:10 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:40.153 16:11:10 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:40.153 16:11:10 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:40.153 16:11:10 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:40.153 16:11:10 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:40.153 16:11:10 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:40.153 16:11:10 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:40.153 16:11:10 -- nvmf/common.sh@294 -- # net_devs=() 00:20:40.153 16:11:10 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:40.153 16:11:10 -- nvmf/common.sh@295 
-- # e810=() 00:20:40.153 16:11:10 -- nvmf/common.sh@295 -- # local -ga e810 00:20:40.153 16:11:10 -- nvmf/common.sh@296 -- # x722=() 00:20:40.153 16:11:10 -- nvmf/common.sh@296 -- # local -ga x722 00:20:40.153 16:11:10 -- nvmf/common.sh@297 -- # mlx=() 00:20:40.153 16:11:10 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:40.153 16:11:10 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.153 16:11:10 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.153 16:11:10 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.153 16:11:10 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.153 16:11:10 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.153 16:11:10 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.153 16:11:10 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.153 16:11:10 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.153 16:11:10 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.153 16:11:10 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.153 16:11:10 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.153 16:11:10 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:40.153 16:11:10 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:40.153 16:11:10 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:40.153 16:11:10 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:40.153 16:11:10 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:20:40.153 16:11:10 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:40.153 16:11:10 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:40.153 16:11:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:40.153 16:11:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:40.153 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:40.154 16:11:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:40.154 16:11:10 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:40.154 16:11:10 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:40.154 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:40.154 16:11:10 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:40.154 16:11:10 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:40.154 16:11:10 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:40.154 16:11:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.154 16:11:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 
00:20:40.154 16:11:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.154 16:11:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:40.154 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:40.154 16:11:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.154 16:11:10 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:40.154 16:11:10 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.154 16:11:10 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:40.154 16:11:10 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.154 16:11:10 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:40.154 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:40.154 16:11:10 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.154 16:11:10 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:40.154 16:11:10 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:40.154 16:11:10 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:40.154 16:11:10 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:40.154 16:11:10 -- nvmf/common.sh@57 -- # uname 00:20:40.154 16:11:10 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:40.154 16:11:10 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:40.154 16:11:10 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:40.154 16:11:10 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:40.154 16:11:10 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:40.154 16:11:10 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:40.154 16:11:10 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:40.154 16:11:10 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:40.154 16:11:10 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:40.154 16:11:10 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:40.154 16:11:10 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:40.154 16:11:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:40.154 16:11:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:40.154 16:11:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:40.154 16:11:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:40.154 16:11:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:40.154 16:11:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:40.154 16:11:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.154 16:11:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:40.154 16:11:10 -- nvmf/common.sh@104 -- # continue 2 00:20:40.154 16:11:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:40.154 16:11:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.154 16:11:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.154 16:11:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:40.154 16:11:10 -- nvmf/common.sh@104 -- # continue 2 00:20:40.154 16:11:10 -- nvmf/common.sh@72 -- # for nic_name in 
$(get_rdma_if_list) 00:20:40.154 16:11:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:40.154 16:11:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:40.154 16:11:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:40.154 16:11:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:40.154 16:11:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:40.154 16:11:10 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:40.154 16:11:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:40.154 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:40.154 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:40.154 altname enp217s0f0np0 00:20:40.154 altname ens818f0np0 00:20:40.154 inet 192.168.100.8/24 scope global mlx_0_0 00:20:40.154 valid_lft forever preferred_lft forever 00:20:40.154 16:11:10 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:40.154 16:11:10 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:40.154 16:11:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:40.154 16:11:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:40.154 16:11:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:40.154 16:11:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:40.154 16:11:10 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:40.154 16:11:10 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:40.154 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:40.154 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:40.154 altname enp217s0f1np1 00:20:40.154 altname ens818f1np1 00:20:40.154 inet 192.168.100.9/24 scope global mlx_0_1 00:20:40.154 valid_lft forever preferred_lft forever 00:20:40.154 16:11:10 -- nvmf/common.sh@410 -- # return 0 00:20:40.154 16:11:10 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:40.154 16:11:10 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:40.154 16:11:10 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:40.154 16:11:10 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:40.154 16:11:10 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:40.154 16:11:10 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:40.154 16:11:10 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:40.154 16:11:10 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:40.154 16:11:10 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:40.154 16:11:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:40.154 16:11:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.154 16:11:10 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:40.154 16:11:10 -- nvmf/common.sh@104 -- # continue 2 00:20:40.154 16:11:10 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:40.154 16:11:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.154 16:11:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:40.154 16:11:10 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:40.154 16:11:10 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:40.154 16:11:10 -- 
nvmf/common.sh@104 -- # continue 2 00:20:40.154 16:11:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:40.154 16:11:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:40.154 16:11:10 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:40.154 16:11:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:40.154 16:11:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:40.154 16:11:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:40.154 16:11:10 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:40.154 16:11:10 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:40.154 16:11:10 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:40.154 16:11:10 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:40.154 16:11:10 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:40.154 16:11:10 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:40.154 16:11:10 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:40.154 192.168.100.9' 00:20:40.154 16:11:10 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:40.154 192.168.100.9' 00:20:40.154 16:11:10 -- nvmf/common.sh@445 -- # head -n 1 00:20:40.154 16:11:10 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:40.413 16:11:10 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:40.413 192.168.100.9' 00:20:40.413 16:11:10 -- nvmf/common.sh@446 -- # tail -n +2 00:20:40.413 16:11:10 -- nvmf/common.sh@446 -- # head -n 1 00:20:40.413 16:11:10 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:40.413 16:11:10 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:40.413 16:11:10 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:20:40.413 16:11:10 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:40.413 16:11:10 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:40.413 16:11:10 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:40.413 16:11:10 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:40.413 16:11:10 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:20:40.413 16:11:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:40.413 16:11:10 -- common/autotest_common.sh@10 -- # set +x 00:20:40.414 16:11:10 -- nvmf/common.sh@469 -- # nvmfpid=1388850 00:20:40.414 16:11:10 -- nvmf/common.sh@470 -- # waitforlisten 1388850 00:20:40.414 16:11:10 -- common/autotest_common.sh@829 -- # '[' -z 1388850 ']' 00:20:40.414 16:11:10 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.414 16:11:10 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:40.414 16:11:10 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.414 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.414 16:11:10 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:40.414 16:11:10 -- common/autotest_common.sh@10 -- # set +x 00:20:40.414 16:11:10 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:20:40.414 [2024-11-20 16:11:11.040231] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:20:40.414 [2024-11-20 16:11:11.040283] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.414 EAL: No free 2048 kB hugepages reported on node 1 00:20:40.414 [2024-11-20 16:11:11.109552] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.414 [2024-11-20 16:11:11.146129] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:20:40.414 [2024-11-20 16:11:11.146250] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.414 [2024-11-20 16:11:11.146260] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.414 [2024-11-20 16:11:11.146268] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:40.414 [2024-11-20 16:11:11.146395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:20:40.414 [2024-11-20 16:11:11.146483] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:20:40.414 [2024-11-20 16:11:11.146574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:40.414 [2024-11-20 16:11:11.146574] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:20:41.348 16:11:11 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:41.348 16:11:11 -- common/autotest_common.sh@862 -- # return 0 00:20:41.348 16:11:11 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:20:41.348 16:11:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:41.348 16:11:11 -- common/autotest_common.sh@10 -- # set +x 00:20:41.348 16:11:11 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.348 16:11:11 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:41.348 16:11:11 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.348 16:11:11 -- common/autotest_common.sh@10 -- # set +x 00:20:41.348 [2024-11-20 16:11:11.940240] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x131f9b0/0x1323e80) succeed. 00:20:41.349 [2024-11-20 16:11:11.949391] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1320f50/0x1365520) succeed. 
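At this point bdevio.sh has the target application up: nvmf_tgt was started on core mask 0x78 (reactors on cores 3-6), waitforlisten blocked until /var/tmp/spdk.sock appeared, and the RDMA transport was created, which is what produces the two create_ib_device notices for mlx5_0 and mlx5_1. Condensed into plain commands, with scripts/rpc.py standing in for the suite's rpc_cmd wrapper, the bring-up looks roughly like:

    # Start the target with the same flags as nvmf/common.sh@468 above.
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 &
    # Create the RDMA transport (bdevio.sh@18 above); buffer and IO-unit sizes taken from the trace.
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192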
00:20:41.349 16:11:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.349 16:11:12 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:41.349 16:11:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.349 16:11:12 -- common/autotest_common.sh@10 -- # set +x 00:20:41.349 Malloc0 00:20:41.349 16:11:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.349 16:11:12 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:41.349 16:11:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.349 16:11:12 -- common/autotest_common.sh@10 -- # set +x 00:20:41.349 16:11:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.349 16:11:12 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:41.349 16:11:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.349 16:11:12 -- common/autotest_common.sh@10 -- # set +x 00:20:41.349 16:11:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.349 16:11:12 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:41.349 16:11:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.349 16:11:12 -- common/autotest_common.sh@10 -- # set +x 00:20:41.349 [2024-11-20 16:11:12.117710] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:20:41.349 16:11:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.349 16:11:12 -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:20:41.349 16:11:12 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:41.349 16:11:12 -- nvmf/common.sh@520 -- # config=() 00:20:41.349 16:11:12 -- nvmf/common.sh@520 -- # local subsystem config 00:20:41.349 16:11:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:20:41.349 16:11:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:20:41.349 { 00:20:41.349 "params": { 00:20:41.349 "name": "Nvme$subsystem", 00:20:41.349 "trtype": "$TEST_TRANSPORT", 00:20:41.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:41.349 "adrfam": "ipv4", 00:20:41.349 "trsvcid": "$NVMF_PORT", 00:20:41.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:41.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:41.349 "hdgst": ${hdgst:-false}, 00:20:41.349 "ddgst": ${ddgst:-false} 00:20:41.349 }, 00:20:41.349 "method": "bdev_nvme_attach_controller" 00:20:41.349 } 00:20:41.349 EOF 00:20:41.349 )") 00:20:41.349 16:11:12 -- nvmf/common.sh@542 -- # cat 00:20:41.349 16:11:12 -- nvmf/common.sh@544 -- # jq . 00:20:41.349 16:11:12 -- nvmf/common.sh@545 -- # IFS=, 00:20:41.349 16:11:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:20:41.349 "params": { 00:20:41.349 "name": "Nvme1", 00:20:41.349 "trtype": "rdma", 00:20:41.349 "traddr": "192.168.100.8", 00:20:41.349 "adrfam": "ipv4", 00:20:41.349 "trsvcid": "4420", 00:20:41.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:41.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:41.349 "hdgst": false, 00:20:41.349 "ddgst": false 00:20:41.349 }, 00:20:41.349 "method": "bdev_nvme_attach_controller" 00:20:41.349 }' 00:20:41.611 [2024-11-20 16:11:12.167095] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
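The rpc_cmd calls at bdevio.sh@19-22 above build the test subsystem: a 64 MiB malloc bdev, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and an RDMA listener on 192.168.100.8:4420. The same sequence as plain rpc.py invocations (a sketch; the suite goes through its rpc_cmd wrapper):

    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420

Malloc0 is the namespace that bdevio later reports as Nvme1n1 (131072 blocks of 512 bytes, 64 MiB).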
00:20:41.611 [2024-11-20 16:11:12.167148] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389138 ] 00:20:41.611 EAL: No free 2048 kB hugepages reported on node 1 00:20:41.611 [2024-11-20 16:11:12.238867] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:41.611 [2024-11-20 16:11:12.277413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:41.611 [2024-11-20 16:11:12.277508] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:41.611 [2024-11-20 16:11:12.277510] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:41.872 [2024-11-20 16:11:12.448997] rpc.c: 181:spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:20:41.872 [2024-11-20 16:11:12.449029] rpc.c: 90:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:20:41.872 I/O targets: 00:20:41.872 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:41.872 00:20:41.872 00:20:41.872 CUnit - A unit testing framework for C - Version 2.1-3 00:20:41.872 http://cunit.sourceforge.net/ 00:20:41.872 00:20:41.872 00:20:41.872 Suite: bdevio tests on: Nvme1n1 00:20:41.873 Test: blockdev write read block ...passed 00:20:41.873 Test: blockdev write zeroes read block ...passed 00:20:41.873 Test: blockdev write zeroes read no split ...passed 00:20:41.873 Test: blockdev write zeroes read split ...passed 00:20:41.873 Test: blockdev write zeroes read split partial ...passed 00:20:41.873 Test: blockdev reset ...[2024-11-20 16:11:12.478852] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:41.873 [2024-11-20 16:11:12.502018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:20:41.873 [2024-11-20 16:11:12.528408] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
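The --json /dev/fd/62 argument hands bdevio the configuration that gen_nvmf_target_json assembled above. Its bdev_nvme_attach_controller entry, reflowed from the single-line printf in the trace (the surrounding file wrapper is omitted here):

    {
      "params": {
        "name": "Nvme1",
        "trtype": "rdma",
        "traddr": "192.168.100.8",
        "adrfam": "ipv4",
        "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false,
        "ddgst": false
      },
      "method": "bdev_nvme_attach_controller"
    }

The spdk.sock "in use" errors right after are expected: bdevio is a second SPDK application started while nvmf_tgt still owns the default RPC socket, and the suite proceeds regardless, as the passed results that follow show.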
00:20:41.873 passed 00:20:41.873 Test: blockdev write read 8 blocks ...passed 00:20:41.873 Test: blockdev write read size > 128k ...passed 00:20:41.873 Test: blockdev write read invalid size ...passed 00:20:41.873 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:41.873 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:41.873 Test: blockdev write read max offset ...passed 00:20:41.873 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:41.873 Test: blockdev writev readv 8 blocks ...passed 00:20:41.873 Test: blockdev writev readv 30 x 1block ...passed 00:20:41.873 Test: blockdev writev readv block ...passed 00:20:41.873 Test: blockdev writev readv size > 128k ...passed 00:20:41.873 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:41.873 Test: blockdev comparev and writev ...[2024-11-20 16:11:12.531270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.873 [2024-11-20 16:11:12.531302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:41.873 [2024-11-20 16:11:12.531314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.873 [2024-11-20 16:11:12.531324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:41.873 [2024-11-20 16:11:12.531501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.873 [2024-11-20 16:11:12.531512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:41.873 [2024-11-20 16:11:12.531527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.873 [2024-11-20 16:11:12.531536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:41.873 [2024-11-20 16:11:12.531713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.873 [2024-11-20 16:11:12.531724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:41.873 [2024-11-20 16:11:12.531734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.873 [2024-11-20 16:11:12.531743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:41.873 [2024-11-20 16:11:12.531905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.873 [2024-11-20 16:11:12.531915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:41.873 [2024-11-20 16:11:12.531925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:41.873 [2024-11-20 16:11:12.531934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:41.873 passed 00:20:41.873 Test: blockdev nvme passthru rw ...passed 00:20:41.873 Test: blockdev nvme passthru vendor specific ...[2024-11-20 16:11:12.532201] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:41.873 [2024-11-20 16:11:12.532213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:41.873 [2024-11-20 16:11:12.532254] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:41.873 [2024-11-20 16:11:12.532265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:41.873 [2024-11-20 16:11:12.532307] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:41.873 [2024-11-20 16:11:12.532317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:41.873 [2024-11-20 16:11:12.532358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:20:41.873 [2024-11-20 16:11:12.532368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:41.873 passed 00:20:41.873 Test: blockdev nvme admin passthru ...passed 00:20:41.873 Test: blockdev copy ...passed 00:20:41.873 00:20:41.873 Run Summary: Type Total Ran Passed Failed Inactive 00:20:41.873 suites 1 1 n/a 0 0 00:20:41.873 tests 23 23 23 0 0 00:20:41.873 asserts 152 152 152 0 n/a 00:20:41.873 00:20:41.873 Elapsed time = 0.171 seconds 00:20:42.133 16:11:12 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:42.133 16:11:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.133 16:11:12 -- common/autotest_common.sh@10 -- # set +x 00:20:42.133 16:11:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.133 16:11:12 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:42.133 16:11:12 -- target/bdevio.sh@30 -- # nvmftestfini 00:20:42.133 16:11:12 -- nvmf/common.sh@476 -- # nvmfcleanup 00:20:42.133 16:11:12 -- nvmf/common.sh@116 -- # sync 00:20:42.133 16:11:12 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:20:42.133 16:11:12 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:20:42.133 16:11:12 -- nvmf/common.sh@119 -- # set +e 00:20:42.133 16:11:12 -- nvmf/common.sh@120 -- # for i in {1..20} 00:20:42.133 16:11:12 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:20:42.133 rmmod nvme_rdma 00:20:42.133 rmmod nvme_fabrics 00:20:42.133 16:11:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:20:42.133 16:11:12 -- nvmf/common.sh@123 -- # set -e 00:20:42.133 16:11:12 -- nvmf/common.sh@124 -- # return 0 00:20:42.133 16:11:12 -- nvmf/common.sh@477 -- # '[' -n 1388850 ']' 00:20:42.133 16:11:12 -- nvmf/common.sh@478 -- # killprocess 1388850 00:20:42.133 16:11:12 -- common/autotest_common.sh@936 -- # '[' -z 1388850 ']' 00:20:42.133 16:11:12 -- common/autotest_common.sh@940 -- # kill -0 1388850 00:20:42.133 16:11:12 -- common/autotest_common.sh@941 -- # uname 00:20:42.133 16:11:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:42.133 16:11:12 -- 
common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1388850 00:20:42.133 16:11:12 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:20:42.133 16:11:12 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:20:42.133 16:11:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1388850' 00:20:42.133 killing process with pid 1388850 00:20:42.133 16:11:12 -- common/autotest_common.sh@955 -- # kill 1388850 00:20:42.133 16:11:12 -- common/autotest_common.sh@960 -- # wait 1388850 00:20:42.393 16:11:13 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:20:42.393 16:11:13 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:20:42.393 00:20:42.393 real 0m8.966s 00:20:42.393 user 0m10.659s 00:20:42.393 sys 0m5.734s 00:20:42.393 16:11:13 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:20:42.393 16:11:13 -- common/autotest_common.sh@10 -- # set +x 00:20:42.393 ************************************ 00:20:42.393 END TEST nvmf_bdevio 00:20:42.393 ************************************ 00:20:42.393 16:11:13 -- nvmf/nvmf.sh@57 -- # '[' rdma = tcp ']' 00:20:42.393 16:11:13 -- nvmf/nvmf.sh@63 -- # '[' 1 -eq 1 ']' 00:20:42.393 16:11:13 -- nvmf/nvmf.sh@64 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:42.393 16:11:13 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:42.393 16:11:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:42.393 16:11:13 -- common/autotest_common.sh@10 -- # set +x 00:20:42.393 ************************************ 00:20:42.393 START TEST nvmf_fuzz 00:20:42.393 ************************************ 00:20:42.393 16:11:13 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=rdma 00:20:42.653 * Looking for test storage... 00:20:42.653 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:20:42.653 16:11:13 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:20:42.653 16:11:13 -- common/autotest_common.sh@1690 -- # lcov --version 00:20:42.653 16:11:13 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:20:42.653 16:11:13 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:20:42.653 16:11:13 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:20:42.653 16:11:13 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:20:42.653 16:11:13 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:20:42.653 16:11:13 -- scripts/common.sh@335 -- # IFS=.-: 00:20:42.653 16:11:13 -- scripts/common.sh@335 -- # read -ra ver1 00:20:42.653 16:11:13 -- scripts/common.sh@336 -- # IFS=.-: 00:20:42.653 16:11:13 -- scripts/common.sh@336 -- # read -ra ver2 00:20:42.653 16:11:13 -- scripts/common.sh@337 -- # local 'op=<' 00:20:42.653 16:11:13 -- scripts/common.sh@339 -- # ver1_l=2 00:20:42.653 16:11:13 -- scripts/common.sh@340 -- # ver2_l=1 00:20:42.653 16:11:13 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:20:42.653 16:11:13 -- scripts/common.sh@343 -- # case "$op" in 00:20:42.653 16:11:13 -- scripts/common.sh@344 -- # : 1 00:20:42.653 16:11:13 -- scripts/common.sh@363 -- # (( v = 0 )) 00:20:42.653 16:11:13 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:42.653 16:11:13 -- scripts/common.sh@364 -- # decimal 1 00:20:42.653 16:11:13 -- scripts/common.sh@352 -- # local d=1 00:20:42.653 16:11:13 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:42.653 16:11:13 -- scripts/common.sh@354 -- # echo 1 00:20:42.653 16:11:13 -- scripts/common.sh@364 -- # ver1[v]=1 00:20:42.653 16:11:13 -- scripts/common.sh@365 -- # decimal 2 00:20:42.653 16:11:13 -- scripts/common.sh@352 -- # local d=2 00:20:42.653 16:11:13 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:42.653 16:11:13 -- scripts/common.sh@354 -- # echo 2 00:20:42.653 16:11:13 -- scripts/common.sh@365 -- # ver2[v]=2 00:20:42.653 16:11:13 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:20:42.653 16:11:13 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:20:42.653 16:11:13 -- scripts/common.sh@367 -- # return 0 00:20:42.653 16:11:13 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:42.653 16:11:13 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:20:42.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.653 --rc genhtml_branch_coverage=1 00:20:42.653 --rc genhtml_function_coverage=1 00:20:42.653 --rc genhtml_legend=1 00:20:42.653 --rc geninfo_all_blocks=1 00:20:42.653 --rc geninfo_unexecuted_blocks=1 00:20:42.653 00:20:42.653 ' 00:20:42.653 16:11:13 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:20:42.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.653 --rc genhtml_branch_coverage=1 00:20:42.653 --rc genhtml_function_coverage=1 00:20:42.653 --rc genhtml_legend=1 00:20:42.653 --rc geninfo_all_blocks=1 00:20:42.653 --rc geninfo_unexecuted_blocks=1 00:20:42.653 00:20:42.653 ' 00:20:42.653 16:11:13 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:20:42.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.653 --rc genhtml_branch_coverage=1 00:20:42.653 --rc genhtml_function_coverage=1 00:20:42.653 --rc genhtml_legend=1 00:20:42.653 --rc geninfo_all_blocks=1 00:20:42.653 --rc geninfo_unexecuted_blocks=1 00:20:42.653 00:20:42.653 ' 00:20:42.653 16:11:13 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:20:42.653 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:42.653 --rc genhtml_branch_coverage=1 00:20:42.653 --rc genhtml_function_coverage=1 00:20:42.653 --rc genhtml_legend=1 00:20:42.653 --rc geninfo_all_blocks=1 00:20:42.653 --rc geninfo_unexecuted_blocks=1 00:20:42.653 00:20:42.653 ' 00:20:42.653 16:11:13 -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:20:42.653 16:11:13 -- nvmf/common.sh@7 -- # uname -s 00:20:42.653 16:11:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:42.653 16:11:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:42.653 16:11:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:42.653 16:11:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:42.653 16:11:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:42.653 16:11:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:42.653 16:11:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:42.653 16:11:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:42.653 16:11:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:42.653 16:11:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:42.653 16:11:13 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:20:42.653 16:11:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:20:42.653 16:11:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:42.653 16:11:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:42.653 16:11:13 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:42.653 16:11:13 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:20:42.653 16:11:13 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:42.653 16:11:13 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:42.653 16:11:13 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:42.653 16:11:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.654 16:11:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.654 16:11:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.654 16:11:13 -- paths/export.sh@5 -- # export PATH 00:20:42.654 16:11:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:42.654 16:11:13 -- nvmf/common.sh@46 -- # : 0 00:20:42.654 16:11:13 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:20:42.654 16:11:13 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:20:42.654 16:11:13 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:20:42.654 16:11:13 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:42.654 16:11:13 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:42.654 16:11:13 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:20:42.654 16:11:13 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:20:42.654 16:11:13 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:20:42.654 16:11:13 -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:20:42.654 16:11:13 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:20:42.654 16:11:13 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.654 16:11:13 -- nvmf/common.sh@436 -- # prepare_net_devs 00:20:42.654 16:11:13 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:20:42.654 16:11:13 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:20:42.654 16:11:13 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.654 16:11:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:42.654 16:11:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.654 16:11:13 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:20:42.654 16:11:13 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:20:42.654 16:11:13 -- nvmf/common.sh@284 -- # xtrace_disable 00:20:42.654 16:11:13 -- common/autotest_common.sh@10 -- # set +x 00:20:50.778 16:11:20 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:20:50.778 16:11:20 -- nvmf/common.sh@290 -- # pci_devs=() 00:20:50.778 16:11:20 -- nvmf/common.sh@290 -- # local -a pci_devs 00:20:50.778 16:11:20 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:20:50.778 16:11:20 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:20:50.778 16:11:20 -- nvmf/common.sh@292 -- # pci_drivers=() 00:20:50.778 16:11:20 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:20:50.778 16:11:20 -- nvmf/common.sh@294 -- # net_devs=() 00:20:50.778 16:11:20 -- nvmf/common.sh@294 -- # local -ga net_devs 00:20:50.778 16:11:20 -- nvmf/common.sh@295 -- # e810=() 00:20:50.778 16:11:20 -- nvmf/common.sh@295 -- # local -ga e810 00:20:50.778 16:11:20 -- nvmf/common.sh@296 -- # x722=() 00:20:50.778 16:11:20 -- nvmf/common.sh@296 -- # local -ga x722 00:20:50.778 16:11:20 -- nvmf/common.sh@297 -- # mlx=() 00:20:50.778 16:11:20 -- nvmf/common.sh@297 -- # local -ga mlx 00:20:50.778 16:11:20 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:50.778 16:11:20 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:50.778 16:11:20 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:50.778 16:11:20 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:50.778 16:11:20 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:50.778 16:11:20 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:50.778 16:11:20 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:50.778 16:11:20 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:50.778 16:11:20 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:50.778 16:11:20 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:50.778 16:11:20 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:50.778 16:11:20 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:20:50.778 16:11:20 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:20:50.778 16:11:20 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:20:50.778 16:11:20 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:20:50.778 16:11:20 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 
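While sourcing nvmf/common.sh, the fuzz test also regenerates the host identity (common.sh@17-19 above): nvme gen-hostnqn produces the uuid-based NQN and the bare UUID becomes the host ID. A sketch of that relationship; the exact string handling inside common.sh is not visible in the trace, so the parameter expansion below is only illustrative:

    NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>, per the trace
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # 8013ee90-59d8-e711-906e-00163566263e in this run
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")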
00:20:50.778 16:11:20 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:20:50.778 16:11:20 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:20:50.778 16:11:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:50.778 16:11:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:20:50.778 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:20:50.778 16:11:20 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:50.778 16:11:20 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:50.778 16:11:20 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:50.778 16:11:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:50.778 16:11:20 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:50.778 16:11:20 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:50.778 16:11:20 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:20:50.778 16:11:20 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:20:50.778 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:20:50.778 16:11:20 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:20:50.778 16:11:20 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:20:50.778 16:11:20 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:20:50.778 16:11:20 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:20:50.778 16:11:20 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:20:50.778 16:11:20 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:20:50.778 16:11:20 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:20:50.778 16:11:20 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:20:50.778 16:11:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:50.779 16:11:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.779 16:11:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:50.779 16:11:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.779 16:11:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:20:50.779 Found net devices under 0000:d9:00.0: mlx_0_0 00:20:50.779 16:11:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.779 16:11:20 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:20:50.779 16:11:20 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:50.779 16:11:20 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:20:50.779 16:11:20 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:50.779 16:11:20 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:20:50.779 Found net devices under 0000:d9:00.1: mlx_0_1 00:20:50.779 16:11:20 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:20:50.779 16:11:20 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:20:50.779 16:11:20 -- nvmf/common.sh@402 -- # is_hw=yes 00:20:50.779 16:11:20 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:20:50.779 16:11:20 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:20:50.779 16:11:20 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:20:50.779 16:11:20 -- nvmf/common.sh@408 -- # rdma_device_init 00:20:50.779 16:11:20 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:20:50.779 16:11:20 -- nvmf/common.sh@57 -- # uname 00:20:50.779 16:11:20 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:20:50.779 16:11:20 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:20:50.779 16:11:20 -- nvmf/common.sh@62 -- # modprobe ib_core 00:20:50.779 16:11:20 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:20:50.779 
16:11:20 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:20:50.779 16:11:20 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:20:50.779 16:11:20 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:20:50.779 16:11:20 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:20:50.779 16:11:20 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:20:50.779 16:11:20 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:20:50.779 16:11:20 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:20:50.779 16:11:20 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:50.779 16:11:20 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:50.779 16:11:20 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:50.779 16:11:20 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:50.779 16:11:20 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:50.779 16:11:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:50.779 16:11:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.779 16:11:20 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:50.779 16:11:20 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:50.779 16:11:20 -- nvmf/common.sh@104 -- # continue 2 00:20:50.779 16:11:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:50.779 16:11:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.779 16:11:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:50.779 16:11:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.779 16:11:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:50.779 16:11:20 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:50.779 16:11:20 -- nvmf/common.sh@104 -- # continue 2 00:20:50.779 16:11:20 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:50.779 16:11:20 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:20:50.779 16:11:20 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:50.779 16:11:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:50.779 16:11:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:50.779 16:11:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:50.779 16:11:20 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:20:50.779 16:11:20 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:20:50.779 16:11:20 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:20:50.779 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:50.779 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:20:50.779 altname enp217s0f0np0 00:20:50.779 altname ens818f0np0 00:20:50.779 inet 192.168.100.8/24 scope global mlx_0_0 00:20:50.779 valid_lft forever preferred_lft forever 00:20:50.779 16:11:20 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:20:50.779 16:11:20 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:20:50.779 16:11:20 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:50.779 16:11:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:50.779 16:11:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:50.779 16:11:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:50.779 16:11:20 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:20:50.779 16:11:20 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:20:50.779 16:11:20 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:20:50.779 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:20:50.779 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:20:50.779 altname enp217s0f1np1 
00:20:50.779 altname ens818f1np1 00:20:50.779 inet 192.168.100.9/24 scope global mlx_0_1 00:20:50.779 valid_lft forever preferred_lft forever 00:20:50.779 16:11:20 -- nvmf/common.sh@410 -- # return 0 00:20:50.779 16:11:20 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:20:50.779 16:11:20 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:20:50.779 16:11:20 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:20:50.779 16:11:20 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:20:50.779 16:11:20 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:20:50.779 16:11:20 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:20:50.779 16:11:20 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:20:50.779 16:11:20 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:20:50.779 16:11:20 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:20:50.779 16:11:20 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:20:50.779 16:11:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:50.779 16:11:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.779 16:11:20 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:20:50.779 16:11:20 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:20:50.779 16:11:20 -- nvmf/common.sh@104 -- # continue 2 00:20:50.779 16:11:20 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:20:50.779 16:11:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.779 16:11:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:20:50.779 16:11:20 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:20:50.779 16:11:20 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:20:50.779 16:11:20 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:20:50.779 16:11:20 -- nvmf/common.sh@104 -- # continue 2 00:20:50.779 16:11:20 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:50.779 16:11:20 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:20:50.779 16:11:20 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:20:50.779 16:11:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:20:50.779 16:11:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:50.779 16:11:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:50.779 16:11:20 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:20:50.779 16:11:20 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:20:50.779 16:11:20 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:20:50.779 16:11:20 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:20:50.779 16:11:20 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:20:50.779 16:11:20 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:20:50.779 16:11:20 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:20:50.779 192.168.100.9' 00:20:50.779 16:11:20 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:20:50.779 192.168.100.9' 00:20:50.779 16:11:20 -- nvmf/common.sh@445 -- # head -n 1 00:20:50.779 16:11:20 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:20:50.779 16:11:20 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:20:50.779 192.168.100.9' 00:20:50.779 16:11:20 -- nvmf/common.sh@446 -- # tail -n +2 00:20:50.779 16:11:20 -- nvmf/common.sh@446 -- # head -n 1 00:20:50.779 16:11:20 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:20:50.779 16:11:20 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:20:50.779 16:11:20 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 
1024' 00:20:50.779 16:11:20 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:20:50.779 16:11:20 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:20:50.780 16:11:20 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:20:50.780 16:11:20 -- target/fabrics_fuzz.sh@14 -- # nvmfpid=1392588 00:20:50.780 16:11:20 -- target/fabrics_fuzz.sh@13 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:50.780 16:11:20 -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:50.780 16:11:20 -- target/fabrics_fuzz.sh@18 -- # waitforlisten 1392588 00:20:50.780 16:11:20 -- common/autotest_common.sh@829 -- # '[' -z 1392588 ']' 00:20:50.780 16:11:20 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:50.780 16:11:20 -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:50.780 16:11:20 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:50.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:50.780 16:11:20 -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:50.780 16:11:20 -- common/autotest_common.sh@10 -- # set +x 00:20:50.780 16:11:21 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:50.780 16:11:21 -- common/autotest_common.sh@862 -- # return 0 00:20:50.780 16:11:21 -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:20:50.780 16:11:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.780 16:11:21 -- common/autotest_common.sh@10 -- # set +x 00:20:50.780 16:11:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.780 16:11:21 -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:20:50.780 16:11:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.780 16:11:21 -- common/autotest_common.sh@10 -- # set +x 00:20:50.780 Malloc0 00:20:50.780 16:11:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.780 16:11:21 -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:50.780 16:11:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.780 16:11:21 -- common/autotest_common.sh@10 -- # set +x 00:20:50.780 16:11:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.780 16:11:21 -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:50.780 16:11:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.780 16:11:21 -- common/autotest_common.sh@10 -- # set +x 00:20:50.780 16:11:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.780 16:11:21 -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:20:50.780 16:11:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.780 16:11:21 -- common/autotest_common.sh@10 -- # set +x 00:20:50.780 16:11:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.780 16:11:21 -- target/fabrics_fuzz.sh@27 -- # trid='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' 00:20:50.780 16:11:21 -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F 'trtype:rdma 
adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -N -a 00:21:22.889 Fuzzing completed. Shutting down the fuzz application 00:21:22.889 00:21:22.889 Dumping successful admin opcodes: 00:21:22.889 8, 9, 10, 24, 00:21:22.889 Dumping successful io opcodes: 00:21:22.889 0, 9, 00:21:22.889 NS: 0x200003af1f00 I/O qp, Total commands completed: 1004473, total successful commands: 5884, random_seed: 1710237312 00:21:22.889 NS: 0x200003af1f00 admin qp, Total commands completed: 126928, total successful commands: 1036, random_seed: 536885184 00:21:22.889 16:11:51 -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F 'trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420' -j /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:21:22.889 Fuzzing completed. Shutting down the fuzz application 00:21:22.889 00:21:22.889 Dumping successful admin opcodes: 00:21:22.889 24, 00:21:22.889 Dumping successful io opcodes: 00:21:22.889 00:21:22.889 NS: 0x200003af1f00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2430626942 00:21:22.889 NS: 0x200003af1f00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2430702570 00:21:22.889 16:11:52 -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:22.889 16:11:52 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:22.889 16:11:52 -- common/autotest_common.sh@10 -- # set +x 00:21:22.889 16:11:52 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:22.889 16:11:52 -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:21:22.889 16:11:52 -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:21:22.889 16:11:52 -- nvmf/common.sh@476 -- # nvmfcleanup 00:21:22.889 16:11:52 -- nvmf/common.sh@116 -- # sync 00:21:22.889 16:11:52 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:21:22.889 16:11:52 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:21:22.889 16:11:52 -- nvmf/common.sh@119 -- # set +e 00:21:22.889 16:11:52 -- nvmf/common.sh@120 -- # for i in {1..20} 00:21:22.889 16:11:52 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:21:22.889 rmmod nvme_rdma 00:21:22.889 rmmod nvme_fabrics 00:21:22.889 16:11:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:21:22.889 16:11:53 -- nvmf/common.sh@123 -- # set -e 00:21:22.889 16:11:53 -- nvmf/common.sh@124 -- # return 0 00:21:22.889 16:11:53 -- nvmf/common.sh@477 -- # '[' -n 1392588 ']' 00:21:22.889 16:11:53 -- nvmf/common.sh@478 -- # killprocess 1392588 00:21:22.889 16:11:53 -- common/autotest_common.sh@936 -- # '[' -z 1392588 ']' 00:21:22.889 16:11:53 -- common/autotest_common.sh@940 -- # kill -0 1392588 00:21:22.889 16:11:53 -- common/autotest_common.sh@941 -- # uname 00:21:22.889 16:11:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:22.889 16:11:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1392588 00:21:22.889 16:11:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:22.889 16:11:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:22.889 16:11:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1392588' 00:21:22.889 killing process with pid 1392588 00:21:22.889 16:11:53 -- common/autotest_common.sh@955 -- # kill 1392588 00:21:22.889 16:11:53 -- common/autotest_common.sh@960 -- # wait 1392588 
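Stepping back over the two fuzz passes traced above (fabrics_fuzz.sh@30 and @32): the first generates random admin and I/O commands against the RDMA target for 30 seconds, the second replays the canned commands from example.json. Condensed, with the long workspace prefix dropped and my reading of the flags noted in comments (the fuzzer's --help is authoritative):

    TRID='trtype:rdma adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:192.168.100.8 trsvcid:4420'
    # Pass 1: random fuzzing for 30 s (-t 30) with a fixed seed (-S 123456); it completed ~1M I/O commands above.
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -t 30 -S 123456 -F "$TRID" -N -a
    # Pass 2: replay the example command set (-j example.json); only a handful of admin commands, as the second summary shows.
    ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -r /var/tmp/nvme_fuzz -F "$TRID" -j ./test/app/fuzz/nvme_fuzz/example.json -a

Both passes ended with "Fuzzing completed. Shutting down the fuzz application" and no crash, so the suite tears the target down and reports END TEST nvmf_fuzz before moving on to nvmf_multiconnection.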
00:21:22.889 16:11:53 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:21:22.889 16:11:53 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:21:22.889 16:11:53 -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:21:22.889 00:21:22.889 real 0m40.250s 00:21:22.889 user 0m50.077s 00:21:22.889 sys 0m21.159s 00:21:22.889 16:11:53 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:21:22.889 16:11:53 -- common/autotest_common.sh@10 -- # set +x 00:21:22.889 ************************************ 00:21:22.889 END TEST nvmf_fuzz 00:21:22.889 ************************************ 00:21:22.889 16:11:53 -- nvmf/nvmf.sh@65 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:21:22.889 16:11:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:22.889 16:11:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:22.889 16:11:53 -- common/autotest_common.sh@10 -- # set +x 00:21:22.889 ************************************ 00:21:22.889 START TEST nvmf_multiconnection 00:21:22.889 ************************************ 00:21:22.889 16:11:53 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=rdma 00:21:22.889 * Looking for test storage... 00:21:22.889 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:21:22.889 16:11:53 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:21:22.889 16:11:53 -- common/autotest_common.sh@1690 -- # lcov --version 00:21:22.889 16:11:53 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:21:22.889 16:11:53 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:21:22.889 16:11:53 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:21:22.889 16:11:53 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:21:22.889 16:11:53 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:21:22.889 16:11:53 -- scripts/common.sh@335 -- # IFS=.-: 00:21:22.889 16:11:53 -- scripts/common.sh@335 -- # read -ra ver1 00:21:22.889 16:11:53 -- scripts/common.sh@336 -- # IFS=.-: 00:21:22.889 16:11:53 -- scripts/common.sh@336 -- # read -ra ver2 00:21:22.889 16:11:53 -- scripts/common.sh@337 -- # local 'op=<' 00:21:22.889 16:11:53 -- scripts/common.sh@339 -- # ver1_l=2 00:21:22.889 16:11:53 -- scripts/common.sh@340 -- # ver2_l=1 00:21:22.889 16:11:53 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:21:22.889 16:11:53 -- scripts/common.sh@343 -- # case "$op" in 00:21:22.889 16:11:53 -- scripts/common.sh@344 -- # : 1 00:21:22.889 16:11:53 -- scripts/common.sh@363 -- # (( v = 0 )) 00:21:22.889 16:11:53 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:22.889 16:11:53 -- scripts/common.sh@364 -- # decimal 1 00:21:22.889 16:11:53 -- scripts/common.sh@352 -- # local d=1 00:21:22.889 16:11:53 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:22.889 16:11:53 -- scripts/common.sh@354 -- # echo 1 00:21:22.889 16:11:53 -- scripts/common.sh@364 -- # ver1[v]=1 00:21:22.889 16:11:53 -- scripts/common.sh@365 -- # decimal 2 00:21:22.889 16:11:53 -- scripts/common.sh@352 -- # local d=2 00:21:22.889 16:11:53 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:22.889 16:11:53 -- scripts/common.sh@354 -- # echo 2 00:21:22.889 16:11:53 -- scripts/common.sh@365 -- # ver2[v]=2 00:21:22.889 16:11:53 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:21:22.889 16:11:53 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:21:22.889 16:11:53 -- scripts/common.sh@367 -- # return 0 00:21:22.889 16:11:53 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:22.889 16:11:53 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:21:22.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.889 --rc genhtml_branch_coverage=1 00:21:22.889 --rc genhtml_function_coverage=1 00:21:22.889 --rc genhtml_legend=1 00:21:22.889 --rc geninfo_all_blocks=1 00:21:22.889 --rc geninfo_unexecuted_blocks=1 00:21:22.889 00:21:22.889 ' 00:21:22.889 16:11:53 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:21:22.889 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.889 --rc genhtml_branch_coverage=1 00:21:22.889 --rc genhtml_function_coverage=1 00:21:22.889 --rc genhtml_legend=1 00:21:22.889 --rc geninfo_all_blocks=1 00:21:22.889 --rc geninfo_unexecuted_blocks=1 00:21:22.889 00:21:22.889 ' 00:21:22.889 16:11:53 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:21:22.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.890 --rc genhtml_branch_coverage=1 00:21:22.890 --rc genhtml_function_coverage=1 00:21:22.890 --rc genhtml_legend=1 00:21:22.890 --rc geninfo_all_blocks=1 00:21:22.890 --rc geninfo_unexecuted_blocks=1 00:21:22.890 00:21:22.890 ' 00:21:22.890 16:11:53 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:21:22.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.890 --rc genhtml_branch_coverage=1 00:21:22.890 --rc genhtml_function_coverage=1 00:21:22.890 --rc genhtml_legend=1 00:21:22.890 --rc geninfo_all_blocks=1 00:21:22.890 --rc geninfo_unexecuted_blocks=1 00:21:22.890 00:21:22.890 ' 00:21:22.890 16:11:53 -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:21:22.890 16:11:53 -- nvmf/common.sh@7 -- # uname -s 00:21:22.890 16:11:53 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:22.890 16:11:53 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:22.890 16:11:53 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:22.890 16:11:53 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:22.890 16:11:53 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:22.890 16:11:53 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:22.890 16:11:53 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:22.890 16:11:53 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:22.890 16:11:53 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:22.890 16:11:53 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:22.890 16:11:53 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:21:22.890 16:11:53 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:21:22.890 16:11:53 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:22.890 16:11:53 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:22.890 16:11:53 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:22.890 16:11:53 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:21:22.890 16:11:53 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:22.890 16:11:53 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.890 16:11:53 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.890 16:11:53 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.890 16:11:53 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.890 16:11:53 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.890 16:11:53 -- paths/export.sh@5 -- # export PATH 00:21:22.890 16:11:53 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.890 16:11:53 -- nvmf/common.sh@46 -- # : 0 00:21:22.890 16:11:53 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:21:22.890 16:11:53 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:21:22.890 16:11:53 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:21:22.890 16:11:53 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:22.890 16:11:53 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:22.890 16:11:53 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:21:22.890 16:11:53 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:21:22.890 16:11:53 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:21:22.890 16:11:53 -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:22.890 16:11:53 -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:22.890 16:11:53 -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:21:22.890 16:11:53 -- target/multiconnection.sh@16 -- # nvmftestinit 00:21:22.890 16:11:53 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:21:22.890 16:11:53 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:22.890 16:11:53 -- nvmf/common.sh@436 -- # prepare_net_devs 00:21:22.890 16:11:53 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:21:22.890 16:11:53 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:21:22.890 16:11:53 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:22.890 16:11:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:22.890 16:11:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:22.890 16:11:53 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:21:22.890 16:11:53 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:21:22.890 16:11:53 -- nvmf/common.sh@284 -- # xtrace_disable 00:21:22.890 16:11:53 -- common/autotest_common.sh@10 -- # set +x 00:21:29.462 16:12:00 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:21:29.463 16:12:00 -- nvmf/common.sh@290 -- # pci_devs=() 00:21:29.463 16:12:00 -- nvmf/common.sh@290 -- # local -a pci_devs 00:21:29.463 16:12:00 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:21:29.463 16:12:00 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:21:29.463 16:12:00 -- nvmf/common.sh@292 -- # pci_drivers=() 00:21:29.463 16:12:00 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:21:29.463 16:12:00 -- nvmf/common.sh@294 -- # net_devs=() 00:21:29.463 16:12:00 -- nvmf/common.sh@294 -- # local -ga net_devs 00:21:29.463 16:12:00 -- nvmf/common.sh@295 -- # e810=() 00:21:29.463 16:12:00 -- nvmf/common.sh@295 -- # local -ga e810 00:21:29.463 16:12:00 -- nvmf/common.sh@296 -- # x722=() 00:21:29.463 16:12:00 -- nvmf/common.sh@296 -- # local -ga x722 00:21:29.463 16:12:00 -- nvmf/common.sh@297 -- # mlx=() 00:21:29.463 16:12:00 -- nvmf/common.sh@297 -- # local -ga mlx 00:21:29.463 16:12:00 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:29.463 16:12:00 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:29.463 16:12:00 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:29.463 16:12:00 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:29.463 16:12:00 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:29.463 16:12:00 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:29.463 16:12:00 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:29.463 16:12:00 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:29.463 16:12:00 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:29.463 16:12:00 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:29.463 16:12:00 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:29.463 16:12:00 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:21:29.463 16:12:00 -- nvmf/common.sh@320 -- # [[ 
rdma == rdma ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:21:29.463 16:12:00 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:21:29.463 16:12:00 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:21:29.463 16:12:00 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:21:29.463 16:12:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:29.463 16:12:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:21:29.463 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:21:29.463 16:12:00 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:29.463 16:12:00 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:21:29.463 16:12:00 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:21:29.463 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:21:29.463 16:12:00 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:21:29.463 16:12:00 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:21:29.463 16:12:00 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:29.463 16:12:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.463 16:12:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:29.463 16:12:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.463 16:12:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:21:29.463 Found net devices under 0000:d9:00.0: mlx_0_0 00:21:29.463 16:12:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.463 16:12:00 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:21:29.463 16:12:00 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:29.463 16:12:00 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:21:29.463 16:12:00 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:29.463 16:12:00 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:21:29.463 Found net devices under 0000:d9:00.1: mlx_0_1 00:21:29.463 16:12:00 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:21:29.463 16:12:00 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:21:29.463 16:12:00 -- nvmf/common.sh@402 -- # is_hw=yes 00:21:29.463 16:12:00 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@408 -- # rdma_device_init 00:21:29.463 16:12:00 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:21:29.463 16:12:00 -- nvmf/common.sh@57 -- # uname 00:21:29.463 16:12:00 -- nvmf/common.sh@57 -- # '[' 
Linux '!=' Linux ']' 00:21:29.463 16:12:00 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:21:29.463 16:12:00 -- nvmf/common.sh@62 -- # modprobe ib_core 00:21:29.463 16:12:00 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:21:29.463 16:12:00 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:21:29.463 16:12:00 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:21:29.463 16:12:00 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:21:29.463 16:12:00 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:21:29.463 16:12:00 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:21:29.463 16:12:00 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:21:29.463 16:12:00 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:21:29.463 16:12:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:29.463 16:12:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:29.463 16:12:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:29.463 16:12:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:29.463 16:12:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:29.463 16:12:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:29.463 16:12:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.463 16:12:00 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:29.463 16:12:00 -- nvmf/common.sh@104 -- # continue 2 00:21:29.463 16:12:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:29.463 16:12:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.463 16:12:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.463 16:12:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:29.463 16:12:00 -- nvmf/common.sh@104 -- # continue 2 00:21:29.463 16:12:00 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:29.463 16:12:00 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:21:29.463 16:12:00 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:29.463 16:12:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:29.463 16:12:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:29.463 16:12:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:29.463 16:12:00 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:21:29.463 16:12:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:21:29.463 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:29.463 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:21:29.463 altname enp217s0f0np0 00:21:29.463 altname ens818f0np0 00:21:29.463 inet 192.168.100.8/24 scope global mlx_0_0 00:21:29.463 valid_lft forever preferred_lft forever 00:21:29.463 16:12:00 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:21:29.463 16:12:00 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:21:29.463 16:12:00 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:29.463 16:12:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:29.463 16:12:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:29.463 16:12:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:29.463 16:12:00 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:21:29.463 16:12:00 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:21:29.463 16:12:00 -- 
nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:21:29.463 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:21:29.463 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:21:29.463 altname enp217s0f1np1 00:21:29.463 altname ens818f1np1 00:21:29.463 inet 192.168.100.9/24 scope global mlx_0_1 00:21:29.463 valid_lft forever preferred_lft forever 00:21:29.463 16:12:00 -- nvmf/common.sh@410 -- # return 0 00:21:29.463 16:12:00 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:21:29.463 16:12:00 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:21:29.463 16:12:00 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:21:29.463 16:12:00 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:21:29.463 16:12:00 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:21:29.463 16:12:00 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:21:29.463 16:12:00 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:21:29.463 16:12:00 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:21:29.463 16:12:00 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:21:29.463 16:12:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:29.463 16:12:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.463 16:12:00 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:21:29.463 16:12:00 -- nvmf/common.sh@104 -- # continue 2 00:21:29.463 16:12:00 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:21:29.463 16:12:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.463 16:12:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:21:29.463 16:12:00 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:21:29.463 16:12:00 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:21:29.463 16:12:00 -- nvmf/common.sh@104 -- # continue 2 00:21:29.463 16:12:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:29.463 16:12:00 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:21:29.464 16:12:00 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:21:29.464 16:12:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:21:29.464 16:12:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:29.464 16:12:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:29.464 16:12:00 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:21:29.464 16:12:00 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:21:29.464 16:12:00 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:21:29.464 16:12:00 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:21:29.464 16:12:00 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:21:29.464 16:12:00 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:21:29.723 16:12:00 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:21:29.723 192.168.100.9' 00:21:29.723 16:12:00 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:21:29.723 192.168.100.9' 00:21:29.723 16:12:00 -- nvmf/common.sh@445 -- # head -n 1 00:21:29.723 16:12:00 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:21:29.723 16:12:00 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:21:29.723 192.168.100.9' 00:21:29.723 16:12:00 -- nvmf/common.sh@446 -- # tail -n +2 00:21:29.723 16:12:00 -- nvmf/common.sh@446 -- # head -n 1 00:21:29.723 16:12:00 -- 
nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:21:29.723 16:12:00 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:21:29.723 16:12:00 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:21:29.723 16:12:00 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:21:29.723 16:12:00 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:21:29.723 16:12:00 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:21:29.723 16:12:00 -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:21:29.723 16:12:00 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:21:29.723 16:12:00 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:29.723 16:12:00 -- common/autotest_common.sh@10 -- # set +x 00:21:29.723 16:12:00 -- nvmf/common.sh@469 -- # nvmfpid=1401515 00:21:29.723 16:12:00 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:29.723 16:12:00 -- nvmf/common.sh@470 -- # waitforlisten 1401515 00:21:29.723 16:12:00 -- common/autotest_common.sh@829 -- # '[' -z 1401515 ']' 00:21:29.723 16:12:00 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.723 16:12:00 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:29.723 16:12:00 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.723 16:12:00 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:29.723 16:12:00 -- common/autotest_common.sh@10 -- # set +x 00:21:29.723 [2024-11-20 16:12:00.366688] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:21:29.723 [2024-11-20 16:12:00.366744] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.723 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.723 [2024-11-20 16:12:00.440280] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:29.723 [2024-11-20 16:12:00.479715] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:21:29.723 [2024-11-20 16:12:00.479820] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.723 [2024-11-20 16:12:00.479830] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.723 [2024-11-20 16:12:00.479839] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
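The block above is the target bring-up: nvmf/common.sh probes the PCI bus for RDMA-capable NICs, finds the two Mellanox mlx5 ports (0x15b3:0x1015) behind mlx_0_0/mlx_0_1 at 192.168.100.8 and 192.168.100.9, loads the RDMA kernel modules, and then starts nvmf_tgt. Reduced to its essentials, the sequence is roughly the following sketch, assuming the same interface names and addresses shown in the trace:

    # RDMA stack plus the NVMe/RDMA initiator module, as load_ib_rdma_modules does above
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do
        modprobe "$m"
    done
    # address the target will listen on (mlx_0_0 -> 192.168.100.8 in this run)
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1
    # start the SPDK NVMe-oF target on cores 0-3, as nvmfappstart -m 0xF does below
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &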
00:21:29.723 [2024-11-20 16:12:00.479881] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:29.723 [2024-11-20 16:12:00.479990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:29.723 [2024-11-20 16:12:00.480075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:29.723 [2024-11-20 16:12:00.480076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.660 16:12:01 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:30.660 16:12:01 -- common/autotest_common.sh@862 -- # return 0 00:21:30.660 16:12:01 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:21:30.660 16:12:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:30.660 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.660 16:12:01 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:30.660 16:12:01 -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:21:30.660 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.660 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.660 [2024-11-20 16:12:01.271762] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1cb60d0/0x1cba5a0) succeed. 00:21:30.660 [2024-11-20 16:12:01.280850] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1cb7670/0x1cfbc40) succeed. 00:21:30.660 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.660 16:12:01 -- target/multiconnection.sh@21 -- # seq 1 11 00:21:30.660 16:12:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.660 16:12:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:30.660 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.660 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.660 Malloc1 00:21:30.660 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.660 16:12:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:21:30.660 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.660 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.660 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.660 16:12:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:30.660 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.660 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.660 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.660 16:12:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:21:30.660 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.660 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.660 [2024-11-20 16:12:01.458700] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:21:30.660 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.660 16:12:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.660 16:12:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:21:30.660 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.919 16:12:01 -- common/autotest_common.sh@10 -- # set +x 
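From this point the multiconnection test builds its 11 subsystems: for each index it creates a 64 MiB malloc bdev with 512-byte blocks, a subsystem nqn.2016-06.io.spdk:cnodeN with serial SPDKN, attaches the bdev as a namespace, and adds an RDMA listener on 192.168.100.8 port 4420. The rpc_cmd calls traced above for cnode1 and continued below for cnode2 through cnode11 are roughly equivalent to this scripts/rpc.py loop (a sketch; the test itself drives the same RPCs through its shell helpers):

    # one-time transport setup, as shown in the trace above
    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    for i in $(seq 1 11); do
        scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i        # MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
    done

The host side then attaches to each subsystem with nvme connect -i 15 -t rdma -n nqn.2016-06.io.spdk:cnode$i -a 192.168.100.8 -s 4420 (using the host NQN/ID shown above) and waits for the corresponding SPDK1..SPDK11 serials to appear, as the waitforserial calls further down show.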
00:21:30.919 Malloc2 00:21:30.919 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.919 16:12:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:21:30.919 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.919 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.919 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.919 16:12:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:21:30.919 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.919 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.919 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.919 16:12:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:21:30.919 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.919 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.919 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.919 16:12:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.919 16:12:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:21:30.919 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.919 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.919 Malloc3 00:21:30.919 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.919 16:12:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:21:30.919 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.919 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.919 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.919 16:12:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:21:30.920 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.920 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.920 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.920 16:12:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:21:30.920 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.920 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.920 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.920 16:12:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.920 16:12:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:21:30.920 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.920 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.920 Malloc4 00:21:30.920 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.920 16:12:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:21:30.920 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.920 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.920 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.920 16:12:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:21:30.920 16:12:01 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.920 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.920 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.920 16:12:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t rdma -a 192.168.100.8 -s 4420 00:21:30.920 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.920 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.920 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.920 16:12:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.920 16:12:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:21:30.920 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.920 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.920 Malloc5 00:21:30.920 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.920 16:12:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:21:30.920 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.920 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.920 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.920 16:12:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:21:30.920 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.920 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.920 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.920 16:12:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t rdma -a 192.168.100.8 -s 4420 00:21:30.920 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.920 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.920 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.920 16:12:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.920 16:12:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:21:30.920 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.920 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.920 Malloc6 00:21:30.920 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.920 16:12:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:21:30.920 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.920 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.920 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.920 16:12:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:21:30.920 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.920 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.920 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.920 16:12:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t rdma -a 192.168.100.8 -s 4420 00:21:30.920 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.920 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.920 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.920 16:12:01 -- 
target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:30.920 16:12:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:21:30.920 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.920 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.920 Malloc7 00:21:30.920 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.920 16:12:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:21:30.920 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.920 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:30.920 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:30.920 16:12:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:21:30.920 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:30.920 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.179 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.179 16:12:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t rdma -a 192.168.100.8 -s 4420 00:21:31.179 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.179 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.179 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.179 16:12:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.179 16:12:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:21:31.179 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.179 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.179 Malloc8 00:21:31.179 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.179 16:12:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:21:31.179 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.179 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.179 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.179 16:12:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:21:31.179 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.179 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.179 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.179 16:12:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t rdma -a 192.168.100.8 -s 4420 00:21:31.179 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.179 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.179 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.179 16:12:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.179 16:12:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:21:31.179 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.179 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.179 Malloc9 00:21:31.179 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.179 16:12:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:21:31.179 16:12:01 -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:31.179 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.179 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.179 16:12:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:21:31.179 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.179 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.179 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.179 16:12:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t rdma -a 192.168.100.8 -s 4420 00:21:31.179 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.179 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.179 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.179 16:12:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.179 16:12:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:21:31.180 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.180 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.180 Malloc10 00:21:31.180 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.180 16:12:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:21:31.180 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.180 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.180 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.180 16:12:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:21:31.180 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.180 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.180 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.180 16:12:01 -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t rdma -a 192.168.100.8 -s 4420 00:21:31.180 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.180 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.180 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.180 16:12:01 -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.180 16:12:01 -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:21:31.180 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.180 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.180 Malloc11 00:21:31.180 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.180 16:12:01 -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:21:31.180 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.180 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.180 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.180 16:12:01 -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:21:31.180 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.180 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.180 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.180 16:12:01 -- target/multiconnection.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t rdma -a 192.168.100.8 -s 4420 00:21:31.180 16:12:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:31.180 16:12:01 -- common/autotest_common.sh@10 -- # set +x 00:21:31.180 16:12:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:31.180 16:12:01 -- target/multiconnection.sh@28 -- # seq 1 11 00:21:31.180 16:12:01 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:31.180 16:12:01 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:21:32.117 16:12:02 -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:21:32.117 16:12:02 -- common/autotest_common.sh@1187 -- # local i=0 00:21:32.117 16:12:02 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:32.117 16:12:02 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:32.117 16:12:02 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:34.655 16:12:04 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:34.655 16:12:04 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:34.655 16:12:04 -- common/autotest_common.sh@1196 -- # grep -c SPDK1 00:21:34.655 16:12:04 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:34.655 16:12:04 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:34.655 16:12:04 -- common/autotest_common.sh@1197 -- # return 0 00:21:34.655 16:12:04 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:34.655 16:12:04 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode2 -a 192.168.100.8 -s 4420 00:21:35.224 16:12:05 -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:21:35.224 16:12:05 -- common/autotest_common.sh@1187 -- # local i=0 00:21:35.224 16:12:05 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:35.224 16:12:05 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:35.224 16:12:05 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:37.132 16:12:07 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:37.132 16:12:07 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:37.132 16:12:07 -- common/autotest_common.sh@1196 -- # grep -c SPDK2 00:21:37.392 16:12:07 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:37.392 16:12:07 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:37.392 16:12:07 -- common/autotest_common.sh@1197 -- # return 0 00:21:37.392 16:12:07 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:37.392 16:12:07 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode3 -a 192.168.100.8 -s 4420 00:21:38.333 16:12:08 -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:21:38.333 16:12:08 -- common/autotest_common.sh@1187 -- # local i=0 00:21:38.333 16:12:08 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:38.333 16:12:08 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:38.333 16:12:08 -- 
common/autotest_common.sh@1194 -- # sleep 2 00:21:40.241 16:12:10 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:40.241 16:12:10 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:40.241 16:12:10 -- common/autotest_common.sh@1196 -- # grep -c SPDK3 00:21:40.241 16:12:10 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:40.241 16:12:10 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:40.241 16:12:10 -- common/autotest_common.sh@1197 -- # return 0 00:21:40.241 16:12:10 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:40.241 16:12:10 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode4 -a 192.168.100.8 -s 4420 00:21:41.178 16:12:11 -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:21:41.178 16:12:11 -- common/autotest_common.sh@1187 -- # local i=0 00:21:41.178 16:12:11 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:41.178 16:12:11 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:41.178 16:12:11 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:43.716 16:12:13 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:43.716 16:12:13 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:43.716 16:12:13 -- common/autotest_common.sh@1196 -- # grep -c SPDK4 00:21:43.716 16:12:13 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:43.716 16:12:13 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:43.716 16:12:13 -- common/autotest_common.sh@1197 -- # return 0 00:21:43.716 16:12:13 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:43.716 16:12:13 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode5 -a 192.168.100.8 -s 4420 00:21:44.284 16:12:14 -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:21:44.284 16:12:14 -- common/autotest_common.sh@1187 -- # local i=0 00:21:44.284 16:12:14 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:44.284 16:12:14 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:44.284 16:12:14 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:46.189 16:12:16 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:46.189 16:12:16 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:46.189 16:12:16 -- common/autotest_common.sh@1196 -- # grep -c SPDK5 00:21:46.448 16:12:17 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:46.448 16:12:17 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:46.448 16:12:17 -- common/autotest_common.sh@1197 -- # return 0 00:21:46.448 16:12:17 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:46.448 16:12:17 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode6 -a 192.168.100.8 -s 4420 00:21:47.385 16:12:17 -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:21:47.385 16:12:17 -- common/autotest_common.sh@1187 -- # local i=0 00:21:47.385 16:12:17 -- 
common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:47.385 16:12:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:47.385 16:12:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:49.292 16:12:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:49.292 16:12:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:49.292 16:12:19 -- common/autotest_common.sh@1196 -- # grep -c SPDK6 00:21:49.292 16:12:20 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:49.292 16:12:20 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:49.292 16:12:20 -- common/autotest_common.sh@1197 -- # return 0 00:21:49.292 16:12:20 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:49.292 16:12:20 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode7 -a 192.168.100.8 -s 4420 00:21:50.228 16:12:21 -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:21:50.228 16:12:21 -- common/autotest_common.sh@1187 -- # local i=0 00:21:50.228 16:12:21 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:50.228 16:12:21 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:50.228 16:12:21 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:52.762 16:12:23 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:52.762 16:12:23 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:52.762 16:12:23 -- common/autotest_common.sh@1196 -- # grep -c SPDK7 00:21:52.762 16:12:23 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:52.762 16:12:23 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:52.762 16:12:23 -- common/autotest_common.sh@1197 -- # return 0 00:21:52.762 16:12:23 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:52.762 16:12:23 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode8 -a 192.168.100.8 -s 4420 00:21:53.328 16:12:24 -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:21:53.328 16:12:24 -- common/autotest_common.sh@1187 -- # local i=0 00:21:53.328 16:12:24 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:53.328 16:12:24 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:53.328 16:12:24 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:55.233 16:12:26 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:55.233 16:12:26 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:55.233 16:12:26 -- common/autotest_common.sh@1196 -- # grep -c SPDK8 00:21:55.493 16:12:26 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:55.493 16:12:26 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:55.493 16:12:26 -- common/autotest_common.sh@1197 -- # return 0 00:21:55.493 16:12:26 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:55.493 16:12:26 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode9 -a 192.168.100.8 -s 4420 00:21:56.500 
16:12:27 -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:21:56.500 16:12:27 -- common/autotest_common.sh@1187 -- # local i=0 00:21:56.500 16:12:27 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:56.500 16:12:27 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:56.500 16:12:27 -- common/autotest_common.sh@1194 -- # sleep 2 00:21:58.408 16:12:29 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:21:58.408 16:12:29 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:21:58.408 16:12:29 -- common/autotest_common.sh@1196 -- # grep -c SPDK9 00:21:58.408 16:12:29 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:21:58.408 16:12:29 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:21:58.408 16:12:29 -- common/autotest_common.sh@1197 -- # return 0 00:21:58.408 16:12:29 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:21:58.408 16:12:29 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode10 -a 192.168.100.8 -s 4420 00:21:59.343 16:12:30 -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:21:59.343 16:12:30 -- common/autotest_common.sh@1187 -- # local i=0 00:21:59.343 16:12:30 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:21:59.343 16:12:30 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:21:59.343 16:12:30 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:01.880 16:12:32 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:01.880 16:12:32 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:01.880 16:12:32 -- common/autotest_common.sh@1196 -- # grep -c SPDK10 00:22:01.880 16:12:32 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:01.880 16:12:32 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:01.880 16:12:32 -- common/autotest_common.sh@1197 -- # return 0 00:22:01.880 16:12:32 -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:01.880 16:12:32 -- target/multiconnection.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode11 -a 192.168.100.8 -s 4420 00:22:02.448 16:12:33 -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:22:02.448 16:12:33 -- common/autotest_common.sh@1187 -- # local i=0 00:22:02.448 16:12:33 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:02.448 16:12:33 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:02.448 16:12:33 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:04.353 16:12:35 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:04.353 16:12:35 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:04.353 16:12:35 -- common/autotest_common.sh@1196 -- # grep -c SPDK11 00:22:04.353 16:12:35 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:04.353 16:12:35 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:04.353 16:12:35 -- common/autotest_common.sh@1197 -- # return 0 00:22:04.353 16:12:35 -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:22:04.353 [global] 00:22:04.353 
thread=1 00:22:04.353 invalidate=1 00:22:04.353 rw=read 00:22:04.353 time_based=1 00:22:04.353 runtime=10 00:22:04.353 ioengine=libaio 00:22:04.353 direct=1 00:22:04.353 bs=262144 00:22:04.353 iodepth=64 00:22:04.353 norandommap=1 00:22:04.353 numjobs=1 00:22:04.353 00:22:04.353 [job0] 00:22:04.353 filename=/dev/nvme0n1 00:22:04.353 [job1] 00:22:04.353 filename=/dev/nvme10n1 00:22:04.353 [job2] 00:22:04.353 filename=/dev/nvme1n1 00:22:04.353 [job3] 00:22:04.353 filename=/dev/nvme2n1 00:22:04.353 [job4] 00:22:04.353 filename=/dev/nvme3n1 00:22:04.353 [job5] 00:22:04.353 filename=/dev/nvme4n1 00:22:04.353 [job6] 00:22:04.353 filename=/dev/nvme5n1 00:22:04.353 [job7] 00:22:04.353 filename=/dev/nvme6n1 00:22:04.353 [job8] 00:22:04.353 filename=/dev/nvme7n1 00:22:04.353 [job9] 00:22:04.353 filename=/dev/nvme8n1 00:22:04.627 [job10] 00:22:04.627 filename=/dev/nvme9n1 00:22:04.627 Could not set queue depth (nvme0n1) 00:22:04.627 Could not set queue depth (nvme10n1) 00:22:04.627 Could not set queue depth (nvme1n1) 00:22:04.627 Could not set queue depth (nvme2n1) 00:22:04.627 Could not set queue depth (nvme3n1) 00:22:04.627 Could not set queue depth (nvme4n1) 00:22:04.627 Could not set queue depth (nvme5n1) 00:22:04.627 Could not set queue depth (nvme6n1) 00:22:04.627 Could not set queue depth (nvme7n1) 00:22:04.627 Could not set queue depth (nvme8n1) 00:22:04.627 Could not set queue depth (nvme9n1) 00:22:04.888 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.888 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.888 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.888 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.888 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.888 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.888 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.888 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.888 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.888 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.888 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:04.888 fio-3.35 00:22:04.888 Starting 11 threads 00:22:17.102 00:22:17.102 job0: (groupid=0, jobs=1): err= 0: pid=1408545: Wed Nov 20 16:12:46 2024 00:22:17.102 read: IOPS=960, BW=240MiB/s (252MB/s)(2416MiB/10058msec) 00:22:17.102 slat (usec): min=12, max=31652, avg=1025.30, stdev=2914.27 00:22:17.102 clat (msec): min=11, max=125, avg=65.50, stdev= 9.60 00:22:17.102 lat (msec): min=11, max=125, avg=66.53, stdev=10.06 00:22:17.102 clat percentiles (msec): 00:22:17.102 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 60], 20.00th=[ 61], 00:22:17.102 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 65], 00:22:17.102 | 70.00th=[ 70], 80.00th=[ 73], 90.00th=[ 79], 95.00th=[ 81], 00:22:17.102 | 99.00th=[ 89], 99.50th=[ 96], 99.90th=[ 123], 99.95th=[ 126], 00:22:17.102 | 
99.99th=[ 126] 00:22:17.102 bw ( KiB/s): min=198144, max=294912, per=6.14%, avg=245811.20, stdev=25717.17, samples=20 00:22:17.102 iops : min= 774, max= 1152, avg=960.20, stdev=100.46, samples=20 00:22:17.102 lat (msec) : 20=0.30%, 50=5.31%, 100=94.07%, 250=0.32% 00:22:17.102 cpu : usr=0.40%, sys=4.08%, ctx=1921, majf=0, minf=3659 00:22:17.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:17.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.102 issued rwts: total=9665,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.102 job1: (groupid=0, jobs=1): err= 0: pid=1408547: Wed Nov 20 16:12:46 2024 00:22:17.102 read: IOPS=1906, BW=477MiB/s (500MB/s)(4783MiB/10036msec) 00:22:17.102 slat (usec): min=10, max=14608, avg=516.76, stdev=1204.28 00:22:17.102 clat (usec): min=11315, max=77390, avg=33018.91, stdev=6714.60 00:22:17.102 lat (usec): min=11592, max=89790, avg=33535.67, stdev=6870.77 00:22:17.102 clat percentiles (usec): 00:22:17.102 | 1.00th=[27132], 5.00th=[28443], 10.00th=[28705], 20.00th=[29230], 00:22:17.102 | 30.00th=[29754], 40.00th=[30016], 50.00th=[30540], 60.00th=[30802], 00:22:17.102 | 70.00th=[31851], 80.00th=[38536], 90.00th=[41157], 95.00th=[45351], 00:22:17.102 | 99.00th=[63177], 99.50th=[64750], 99.90th=[72877], 99.95th=[73925], 00:22:17.102 | 99.99th=[76022] 00:22:17.102 bw ( KiB/s): min=308224, max=547328, per=12.18%, avg=488140.80, stdev=68897.26, samples=20 00:22:17.102 iops : min= 1204, max= 2138, avg=1906.80, stdev=269.13, samples=20 00:22:17.102 lat (msec) : 20=0.38%, 50=97.19%, 100=2.43% 00:22:17.102 cpu : usr=0.46%, sys=5.15%, ctx=3984, majf=0, minf=4097 00:22:17.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:17.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.102 issued rwts: total=19131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.102 job2: (groupid=0, jobs=1): err= 0: pid=1408549: Wed Nov 20 16:12:46 2024 00:22:17.102 read: IOPS=1301, BW=325MiB/s (341MB/s)(3265MiB/10038msec) 00:22:17.102 slat (usec): min=11, max=16638, avg=757.11, stdev=1787.56 00:22:17.102 clat (usec): min=14873, max=91933, avg=48375.22, stdev=6383.28 00:22:17.102 lat (usec): min=15127, max=91950, avg=49132.33, stdev=6601.94 00:22:17.102 clat percentiles (usec): 00:22:17.102 | 1.00th=[41157], 5.00th=[42730], 10.00th=[43254], 20.00th=[45351], 00:22:17.102 | 30.00th=[45876], 40.00th=[46400], 50.00th=[46924], 60.00th=[47973], 00:22:17.102 | 70.00th=[48497], 80.00th=[49546], 90.00th=[53216], 95.00th=[62129], 00:22:17.102 | 99.00th=[78119], 99.50th=[79168], 99.90th=[84411], 99.95th=[85459], 00:22:17.102 | 99.99th=[88605] 00:22:17.102 bw ( KiB/s): min=224256, max=368128, per=8.31%, avg=332748.80, stdev=30177.11, samples=20 00:22:17.102 iops : min= 876, max= 1438, avg=1299.80, stdev=117.88, samples=20 00:22:17.102 lat (msec) : 20=0.13%, 50=82.19%, 100=17.68% 00:22:17.102 cpu : usr=0.27%, sys=4.19%, ctx=2630, majf=0, minf=4097 00:22:17.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:17.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:22:17.102 issued rwts: total=13061,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.102 job3: (groupid=0, jobs=1): err= 0: pid=1408550: Wed Nov 20 16:12:46 2024 00:22:17.102 read: IOPS=2097, BW=524MiB/s (550MB/s)(5262MiB/10034msec) 00:22:17.102 slat (usec): min=10, max=10518, avg=469.90, stdev=1055.19 00:22:17.102 clat (usec): min=2776, max=71792, avg=30000.78, stdev=7323.31 00:22:17.102 lat (usec): min=2816, max=71825, avg=30470.68, stdev=7465.98 00:22:17.102 clat percentiles (usec): 00:22:17.102 | 1.00th=[14091], 5.00th=[14877], 10.00th=[15795], 20.00th=[28705], 00:22:17.102 | 30.00th=[29230], 40.00th=[29754], 50.00th=[30016], 60.00th=[30540], 00:22:17.102 | 70.00th=[31065], 80.00th=[32375], 90.00th=[40109], 95.00th=[41681], 00:22:17.102 | 99.00th=[45351], 99.50th=[46400], 99.90th=[63177], 99.95th=[69731], 00:22:17.102 | 99.99th=[71828] 00:22:17.102 bw ( KiB/s): min=385024, max=1047599, per=13.41%, avg=537346.35, stdev=138656.95, samples=20 00:22:17.102 iops : min= 1504, max= 4092, avg=2099.00, stdev=541.59, samples=20 00:22:17.102 lat (msec) : 4=0.06%, 10=0.29%, 20=12.62%, 50=86.86%, 100=0.18% 00:22:17.102 cpu : usr=0.62%, sys=6.05%, ctx=4305, majf=0, minf=4097 00:22:17.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:22:17.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.102 issued rwts: total=21049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.102 job4: (groupid=0, jobs=1): err= 0: pid=1408551: Wed Nov 20 16:12:46 2024 00:22:17.102 read: IOPS=960, BW=240MiB/s (252MB/s)(2416MiB/10058msec) 00:22:17.102 slat (usec): min=11, max=21915, avg=1025.06, stdev=2764.33 00:22:17.102 clat (msec): min=11, max=131, avg=65.52, stdev= 9.31 00:22:17.102 lat (msec): min=12, max=131, avg=66.55, stdev= 9.72 00:22:17.102 clat percentiles (msec): 00:22:17.102 | 1.00th=[ 43], 5.00th=[ 55], 10.00th=[ 60], 20.00th=[ 61], 00:22:17.102 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 64], 60.00th=[ 65], 00:22:17.102 | 70.00th=[ 70], 80.00th=[ 72], 90.00th=[ 79], 95.00th=[ 81], 00:22:17.102 | 99.00th=[ 91], 99.50th=[ 96], 99.90th=[ 125], 99.95th=[ 128], 00:22:17.102 | 99.99th=[ 132] 00:22:17.102 bw ( KiB/s): min=195584, max=307200, per=6.13%, avg=245734.40, stdev=26663.08, samples=20 00:22:17.102 iops : min= 764, max= 1200, avg=959.90, stdev=104.15, samples=20 00:22:17.102 lat (msec) : 20=0.53%, 50=3.50%, 100=95.72%, 250=0.26% 00:22:17.102 cpu : usr=0.29%, sys=2.71%, ctx=2027, majf=0, minf=4097 00:22:17.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:17.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.102 issued rwts: total=9662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.102 job5: (groupid=0, jobs=1): err= 0: pid=1408553: Wed Nov 20 16:12:46 2024 00:22:17.102 read: IOPS=1331, BW=333MiB/s (349MB/s)(3349MiB/10059msec) 00:22:17.102 slat (usec): min=12, max=50676, avg=743.01, stdev=3254.76 00:22:17.102 clat (msec): min=12, max=131, avg=47.26, stdev=21.16 00:22:17.102 lat (msec): min=12, max=131, avg=48.00, stdev=21.69 00:22:17.102 clat percentiles (msec): 00:22:17.102 | 1.00th=[ 14], 
5.00th=[ 15], 10.00th=[ 30], 20.00th=[ 31], 00:22:17.102 | 30.00th=[ 32], 40.00th=[ 33], 50.00th=[ 35], 60.00th=[ 62], 00:22:17.102 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 78], 95.00th=[ 80], 00:22:17.102 | 99.00th=[ 85], 99.50th=[ 97], 99.90th=[ 126], 99.95th=[ 128], 00:22:17.102 | 99.99th=[ 128] 00:22:17.102 bw ( KiB/s): min=196608, max=792064, per=8.52%, avg=341324.80, stdev=162324.13, samples=20 00:22:17.102 iops : min= 768, max= 3094, avg=1333.30, stdev=634.08, samples=20 00:22:17.102 lat (msec) : 20=8.23%, 50=47.92%, 100=43.43%, 250=0.42% 00:22:17.102 cpu : usr=0.37%, sys=4.59%, ctx=2537, majf=0, minf=4097 00:22:17.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:17.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.102 issued rwts: total=13396,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.102 job6: (groupid=0, jobs=1): err= 0: pid=1408554: Wed Nov 20 16:12:46 2024 00:22:17.102 read: IOPS=1316, BW=329MiB/s (345MB/s)(3303MiB/10039msec) 00:22:17.102 slat (usec): min=11, max=33162, avg=741.79, stdev=1871.44 00:22:17.102 clat (usec): min=14569, max=95712, avg=47815.31, stdev=5168.40 00:22:17.102 lat (usec): min=15238, max=95795, avg=48557.10, stdev=5445.14 00:22:17.102 clat percentiles (usec): 00:22:17.102 | 1.00th=[40633], 5.00th=[42206], 10.00th=[43254], 20.00th=[45351], 00:22:17.102 | 30.00th=[45876], 40.00th=[46400], 50.00th=[46924], 60.00th=[47449], 00:22:17.102 | 70.00th=[48497], 80.00th=[49021], 90.00th=[52167], 95.00th=[58983], 00:22:17.102 | 99.00th=[66323], 99.50th=[72877], 99.90th=[80217], 99.95th=[83362], 00:22:17.102 | 99.99th=[95945] 00:22:17.102 bw ( KiB/s): min=291328, max=369152, per=8.40%, avg=336614.40, stdev=20462.80, samples=20 00:22:17.102 iops : min= 1138, max= 1442, avg=1314.90, stdev=79.93, samples=20 00:22:17.102 lat (msec) : 20=0.14%, 50=83.93%, 100=15.93% 00:22:17.102 cpu : usr=0.42%, sys=4.12%, ctx=2746, majf=0, minf=4097 00:22:17.102 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:17.102 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.102 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.102 issued rwts: total=13212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.102 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.102 job7: (groupid=0, jobs=1): err= 0: pid=1408555: Wed Nov 20 16:12:46 2024 00:22:17.102 read: IOPS=2142, BW=536MiB/s (562MB/s)(5377MiB/10037msec) 00:22:17.103 slat (usec): min=10, max=19874, avg=455.83, stdev=1146.23 00:22:17.103 clat (usec): min=663, max=88794, avg=29375.50, stdev=11507.04 00:22:17.103 lat (usec): min=704, max=88840, avg=29831.33, stdev=11704.43 00:22:17.103 clat percentiles (usec): 00:22:17.103 | 1.00th=[13698], 5.00th=[14484], 10.00th=[14877], 20.00th=[15664], 00:22:17.103 | 30.00th=[28181], 40.00th=[29754], 50.00th=[30540], 60.00th=[31589], 00:22:17.103 | 70.00th=[32375], 80.00th=[37487], 90.00th=[41157], 95.00th=[45876], 00:22:17.103 | 99.00th=[70779], 99.50th=[78119], 99.90th=[82314], 99.95th=[85459], 00:22:17.103 | 99.99th=[88605] 00:22:17.103 bw ( KiB/s): min=249344, max=1057792, per=13.70%, avg=548966.40, stdev=203913.98, samples=20 00:22:17.103 iops : min= 974, max= 4132, avg=2144.40, stdev=796.54, samples=20 00:22:17.103 lat (usec) : 750=0.01% 00:22:17.103 lat (msec) : 2=0.14%, 
4=0.07%, 10=0.45%, 20=27.62%, 50=68.36% 00:22:17.103 lat (msec) : 100=3.35% 00:22:17.103 cpu : usr=0.67%, sys=6.71%, ctx=4432, majf=0, minf=4097 00:22:17.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:22:17.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.103 issued rwts: total=21507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.103 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.103 job8: (groupid=0, jobs=1): err= 0: pid=1408556: Wed Nov 20 16:12:46 2024 00:22:17.103 read: IOPS=994, BW=249MiB/s (261MB/s)(2500MiB/10058msec) 00:22:17.103 slat (usec): min=11, max=19861, avg=997.10, stdev=2509.64 00:22:17.103 clat (msec): min=8, max=124, avg=63.29, stdev=12.12 00:22:17.103 lat (msec): min=8, max=124, avg=64.28, stdev=12.48 00:22:17.103 clat percentiles (msec): 00:22:17.103 | 1.00th=[ 29], 5.00th=[ 33], 10.00th=[ 48], 20.00th=[ 61], 00:22:17.103 | 30.00th=[ 62], 40.00th=[ 63], 50.00th=[ 63], 60.00th=[ 64], 00:22:17.103 | 70.00th=[ 69], 80.00th=[ 71], 90.00th=[ 79], 95.00th=[ 81], 00:22:17.103 | 99.00th=[ 86], 99.50th=[ 92], 99.90th=[ 122], 99.95th=[ 125], 00:22:17.103 | 99.99th=[ 125] 00:22:17.103 bw ( KiB/s): min=200704, max=442228, per=6.35%, avg=254457.00, stdev=50276.18, samples=20 00:22:17.103 iops : min= 784, max= 1727, avg=993.95, stdev=196.30, samples=20 00:22:17.103 lat (msec) : 10=0.07%, 20=0.51%, 50=10.36%, 100=88.81%, 250=0.25% 00:22:17.103 cpu : usr=0.29%, sys=3.25%, ctx=1986, majf=0, minf=4097 00:22:17.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:22:17.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.103 issued rwts: total=10001,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.103 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.103 job9: (groupid=0, jobs=1): err= 0: pid=1408557: Wed Nov 20 16:12:46 2024 00:22:17.103 read: IOPS=1353, BW=338MiB/s (355MB/s)(3402MiB/10056msec) 00:22:17.103 slat (usec): min=11, max=53015, avg=728.57, stdev=3171.62 00:22:17.103 clat (usec): min=958, max=153334, avg=46513.69, stdev=21471.13 00:22:17.103 lat (usec): min=998, max=153375, avg=47242.25, stdev=22003.17 00:22:17.103 clat percentiles (msec): 00:22:17.103 | 1.00th=[ 10], 5.00th=[ 15], 10.00th=[ 21], 20.00th=[ 30], 00:22:17.103 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 41], 60.00th=[ 61], 00:22:17.103 | 70.00th=[ 62], 80.00th=[ 70], 90.00th=[ 75], 95.00th=[ 80], 00:22:17.103 | 99.00th=[ 84], 99.50th=[ 96], 99.90th=[ 118], 99.95th=[ 129], 00:22:17.103 | 99.99th=[ 131] 00:22:17.103 bw ( KiB/s): min=192000, max=620032, per=8.65%, avg=346700.80, stdev=151495.30, samples=20 00:22:17.103 iops : min= 750, max= 2422, avg=1354.30, stdev=591.78, samples=20 00:22:17.103 lat (usec) : 1000=0.01% 00:22:17.103 lat (msec) : 2=0.01%, 4=0.14%, 10=0.86%, 20=8.87%, 50=43.86% 00:22:17.103 lat (msec) : 100=45.83%, 250=0.41% 00:22:17.103 cpu : usr=0.43%, sys=4.78%, ctx=2679, majf=0, minf=4097 00:22:17.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:17.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.103 issued rwts: total=13606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.103 latency : target=0, window=0, 
percentile=100.00%, depth=64 00:22:17.103 job10: (groupid=0, jobs=1): err= 0: pid=1408558: Wed Nov 20 16:12:46 2024 00:22:17.103 read: IOPS=1308, BW=327MiB/s (343MB/s)(3284MiB/10041msec) 00:22:17.103 slat (usec): min=11, max=14140, avg=754.47, stdev=1755.82 00:22:17.103 clat (usec): min=14202, max=90217, avg=48104.01, stdev=6038.36 00:22:17.103 lat (usec): min=14473, max=91213, avg=48858.48, stdev=6244.11 00:22:17.103 clat percentiles (usec): 00:22:17.103 | 1.00th=[40633], 5.00th=[42730], 10.00th=[43254], 20.00th=[45351], 00:22:17.103 | 30.00th=[45876], 40.00th=[46400], 50.00th=[46924], 60.00th=[47449], 00:22:17.103 | 70.00th=[48497], 80.00th=[49546], 90.00th=[52691], 95.00th=[57934], 00:22:17.103 | 99.00th=[78119], 99.50th=[79168], 99.90th=[84411], 99.95th=[85459], 00:22:17.103 | 99.99th=[87557] 00:22:17.103 bw ( KiB/s): min=248817, max=366592, per=8.35%, avg=334719.25, stdev=25797.99, samples=20 00:22:17.103 iops : min= 971, max= 1432, avg=1307.45, stdev=100.94, samples=20 00:22:17.103 lat (msec) : 20=0.16%, 50=82.96%, 100=16.88% 00:22:17.103 cpu : usr=0.42%, sys=4.54%, ctx=2617, majf=0, minf=4097 00:22:17.103 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:17.103 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:17.103 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:17.103 issued rwts: total=13137,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:17.103 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:17.103 00:22:17.103 Run status group 0 (all jobs): 00:22:17.103 READ: bw=3913MiB/s (4103MB/s), 240MiB/s-536MiB/s (252MB/s-562MB/s), io=38.4GiB (41.3GB), run=10034-10059msec 00:22:17.103 00:22:17.103 Disk stats (read/write): 00:22:17.103 nvme0n1: ios=19019/0, merge=0/0, ticks=1219672/0, in_queue=1219672, util=96.79% 00:22:17.103 nvme10n1: ios=37786/0, merge=0/0, ticks=1215760/0, in_queue=1215760, util=97.02% 00:22:17.103 nvme1n1: ios=25671/0, merge=0/0, ticks=1217076/0, in_queue=1217076, util=97.35% 00:22:17.103 nvme2n1: ios=41632/0, merge=0/0, ticks=1214499/0, in_queue=1214499, util=97.54% 00:22:17.103 nvme3n1: ios=18988/0, merge=0/0, ticks=1215505/0, in_queue=1215505, util=97.65% 00:22:17.103 nvme4n1: ios=26446/0, merge=0/0, ticks=1217662/0, in_queue=1217662, util=98.08% 00:22:17.103 nvme5n1: ios=25970/0, merge=0/0, ticks=1217593/0, in_queue=1217593, util=98.23% 00:22:17.103 nvme6n1: ios=42527/0, merge=0/0, ticks=1218151/0, in_queue=1218151, util=98.41% 00:22:17.103 nvme7n1: ios=19679/0, merge=0/0, ticks=1218285/0, in_queue=1218285, util=98.89% 00:22:17.103 nvme8n1: ios=26919/0, merge=0/0, ticks=1219882/0, in_queue=1219882, util=99.15% 00:22:17.103 nvme9n1: ios=25814/0, merge=0/0, ticks=1217999/0, in_queue=1217999, util=99.30% 00:22:17.103 16:12:46 -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:22:17.103 [global] 00:22:17.103 thread=1 00:22:17.103 invalidate=1 00:22:17.103 rw=randwrite 00:22:17.103 time_based=1 00:22:17.103 runtime=10 00:22:17.103 ioengine=libaio 00:22:17.103 direct=1 00:22:17.103 bs=262144 00:22:17.103 iodepth=64 00:22:17.103 norandommap=1 00:22:17.103 numjobs=1 00:22:17.103 00:22:17.103 [job0] 00:22:17.103 filename=/dev/nvme0n1 00:22:17.103 [job1] 00:22:17.103 filename=/dev/nvme10n1 00:22:17.103 [job2] 00:22:17.103 filename=/dev/nvme1n1 00:22:17.103 [job3] 00:22:17.103 filename=/dev/nvme2n1 00:22:17.103 [job4] 00:22:17.103 filename=/dev/nvme3n1 00:22:17.103 [job5] 
00:22:17.103 filename=/dev/nvme4n1 00:22:17.103 [job6] 00:22:17.103 filename=/dev/nvme5n1 00:22:17.103 [job7] 00:22:17.103 filename=/dev/nvme6n1 00:22:17.103 [job8] 00:22:17.103 filename=/dev/nvme7n1 00:22:17.103 [job9] 00:22:17.103 filename=/dev/nvme8n1 00:22:17.103 [job10] 00:22:17.103 filename=/dev/nvme9n1 00:22:17.103 Could not set queue depth (nvme0n1) 00:22:17.103 Could not set queue depth (nvme10n1) 00:22:17.103 Could not set queue depth (nvme1n1) 00:22:17.103 Could not set queue depth (nvme2n1) 00:22:17.103 Could not set queue depth (nvme3n1) 00:22:17.103 Could not set queue depth (nvme4n1) 00:22:17.103 Could not set queue depth (nvme5n1) 00:22:17.103 Could not set queue depth (nvme6n1) 00:22:17.103 Could not set queue depth (nvme7n1) 00:22:17.103 Could not set queue depth (nvme8n1) 00:22:17.103 Could not set queue depth (nvme9n1) 00:22:17.103 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:17.103 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:17.104 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:17.104 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:17.104 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:17.104 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:17.104 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:17.104 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:17.104 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:17.104 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:17.104 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:22:17.104 fio-3.35 00:22:17.104 Starting 11 threads 00:22:27.087 00:22:27.087 job0: (groupid=0, jobs=1): err= 0: pid=1410304: Wed Nov 20 16:12:57 2024 00:22:27.087 write: IOPS=895, BW=224MiB/s (235MB/s)(2248MiB/10044msec); 0 zone resets 00:22:27.087 slat (usec): min=27, max=24694, avg=1098.03, stdev=2190.41 00:22:27.087 clat (msec): min=2, max=127, avg=70.38, stdev=19.38 00:22:27.087 lat (msec): min=2, max=130, avg=71.48, stdev=19.69 00:22:27.087 clat percentiles (msec): 00:22:27.087 | 1.00th=[ 45], 5.00th=[ 53], 10.00th=[ 53], 20.00th=[ 55], 00:22:27.087 | 30.00th=[ 56], 40.00th=[ 57], 50.00th=[ 65], 60.00th=[ 70], 00:22:27.087 | 70.00th=[ 77], 80.00th=[ 92], 90.00th=[ 105], 95.00th=[ 107], 00:22:27.087 | 99.00th=[ 114], 99.50th=[ 117], 99.90th=[ 127], 99.95th=[ 127], 00:22:27.087 | 99.99th=[ 128] 00:22:27.087 bw ( KiB/s): min=147456, max=300032, per=6.52%, avg=228556.80, stdev=56839.74, samples=20 00:22:27.087 iops : min= 576, max= 1172, avg=892.80, stdev=222.03, samples=20 00:22:27.087 lat (msec) : 4=0.01%, 10=0.18%, 20=0.18%, 50=1.39%, 100=85.60% 00:22:27.087 lat (msec) : 250=12.65% 00:22:27.087 cpu : usr=2.09%, sys=4.20%, ctx=2240, majf=0, minf=1 00:22:27.087 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:27.087 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.087 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.087 issued rwts: total=0,8991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.087 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.087 job1: (groupid=0, jobs=1): err= 0: pid=1410316: Wed Nov 20 16:12:57 2024 00:22:27.087 write: IOPS=1736, BW=434MiB/s (455MB/s)(4362MiB/10045msec); 0 zone resets 00:22:27.087 slat (usec): min=17, max=14097, avg=565.56, stdev=1256.01 00:22:27.087 clat (usec): min=923, max=102032, avg=36271.48, stdev=23518.73 00:22:27.088 lat (usec): min=990, max=103976, avg=36837.05, stdev=23879.34 00:22:27.088 clat percentiles (msec): 00:22:27.088 | 1.00th=[ 10], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 18], 00:22:27.088 | 30.00th=[ 19], 40.00th=[ 19], 50.00th=[ 20], 60.00th=[ 35], 00:22:27.088 | 70.00th=[ 55], 80.00th=[ 57], 90.00th=[ 70], 95.00th=[ 85], 00:22:27.088 | 99.00th=[ 94], 99.50th=[ 96], 99.90th=[ 100], 99.95th=[ 101], 00:22:27.088 | 99.99th=[ 102] 00:22:27.088 bw ( KiB/s): min=174080, max=881152, per=12.71%, avg=445117.80, stdev=281411.22, samples=20 00:22:27.088 iops : min= 680, max= 3442, avg=1738.70, stdev=1099.20, samples=20 00:22:27.088 lat (usec) : 1000=0.01% 00:22:27.088 lat (msec) : 2=0.10%, 4=0.19%, 10=0.72%, 20=54.86%, 50=5.46% 00:22:27.088 lat (msec) : 100=38.65%, 250=0.02% 00:22:27.088 cpu : usr=3.05%, sys=5.07%, ctx=3862, majf=0, minf=1 00:22:27.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:27.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.088 issued rwts: total=0,17447,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.088 job2: (groupid=0, jobs=1): err= 0: pid=1410317: Wed Nov 20 16:12:57 2024 00:22:27.088 write: IOPS=1180, BW=295MiB/s (309MB/s)(2960MiB/10031msec); 0 zone resets 00:22:27.088 slat (usec): min=21, max=27642, avg=831.34, stdev=2026.51 00:22:27.088 clat (msec): min=14, max=130, avg=53.37, stdev=24.76 00:22:27.088 lat (msec): min=14, max=136, avg=54.20, stdev=25.20 00:22:27.088 clat percentiles (msec): 00:22:27.088 | 1.00th=[ 30], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 34], 00:22:27.088 | 30.00th=[ 35], 40.00th=[ 36], 50.00th=[ 38], 60.00th=[ 53], 00:22:27.088 | 70.00th=[ 70], 80.00th=[ 74], 90.00th=[ 99], 95.00th=[ 106], 00:22:27.088 | 99.00th=[ 112], 99.50th=[ 115], 99.90th=[ 122], 99.95th=[ 130], 00:22:27.088 | 99.99th=[ 131] 00:22:27.088 bw ( KiB/s): min=147968, max=512000, per=8.61%, avg=301538.00, stdev=128605.04, samples=20 00:22:27.088 iops : min= 578, max= 2000, avg=1177.85, stdev=502.39, samples=20 00:22:27.088 lat (msec) : 20=0.08%, 50=56.79%, 100=33.46%, 250=9.67% 00:22:27.088 cpu : usr=2.56%, sys=4.08%, ctx=2789, majf=0, minf=1 00:22:27.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:22:27.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.088 issued rwts: total=0,11841,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.088 job3: (groupid=0, jobs=1): err= 0: pid=1410319: Wed Nov 20 16:12:57 2024 00:22:27.088 write: IOPS=1492, BW=373MiB/s (391MB/s)(3748MiB/10043msec); 0 zone resets 00:22:27.088 slat (usec): min=18, max=13611, 
avg=654.71, stdev=1435.13 00:22:27.088 clat (msec): min=5, max=104, avg=42.21, stdev=22.20 00:22:27.088 lat (msec): min=5, max=105, avg=42.87, stdev=22.53 00:22:27.088 clat percentiles (msec): 00:22:27.088 | 1.00th=[ 17], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 19], 00:22:27.088 | 30.00th=[ 20], 40.00th=[ 36], 50.00th=[ 40], 60.00th=[ 52], 00:22:27.088 | 70.00th=[ 55], 80.00th=[ 59], 90.00th=[ 72], 95.00th=[ 88], 00:22:27.088 | 99.00th=[ 95], 99.50th=[ 96], 99.90th=[ 99], 99.95th=[ 99], 00:22:27.088 | 99.99th=[ 104] 00:22:27.088 bw ( KiB/s): min=174080, max=878592, per=10.91%, avg=382131.20, stdev=226698.49, samples=20 00:22:27.088 iops : min= 680, max= 3432, avg=1492.70, stdev=885.54, samples=20 00:22:27.088 lat (msec) : 10=0.09%, 20=35.24%, 50=21.03%, 100=43.61%, 250=0.02% 00:22:27.088 cpu : usr=2.68%, sys=4.48%, ctx=3399, majf=0, minf=1 00:22:27.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:27.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.088 issued rwts: total=0,14990,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.088 job4: (groupid=0, jobs=1): err= 0: pid=1410320: Wed Nov 20 16:12:57 2024 00:22:27.088 write: IOPS=1537, BW=384MiB/s (403MB/s)(3863MiB/10047msec); 0 zone resets 00:22:27.088 slat (usec): min=17, max=35942, avg=624.65, stdev=1395.98 00:22:27.088 clat (msec): min=12, max=110, avg=40.98, stdev=21.71 00:22:27.088 lat (msec): min=12, max=110, avg=41.61, stdev=22.03 00:22:27.088 clat percentiles (msec): 00:22:27.088 | 1.00th=[ 17], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 19], 00:22:27.088 | 30.00th=[ 19], 40.00th=[ 27], 50.00th=[ 40], 60.00th=[ 53], 00:22:27.088 | 70.00th=[ 55], 80.00th=[ 57], 90.00th=[ 72], 95.00th=[ 82], 00:22:27.088 | 99.00th=[ 91], 99.50th=[ 94], 99.90th=[ 97], 99.95th=[ 101], 00:22:27.088 | 99.99th=[ 110] 00:22:27.088 bw ( KiB/s): min=192512, max=892416, per=11.24%, avg=393907.20, stdev=221754.48, samples=20 00:22:27.088 iops : min= 752, max= 3486, avg=1538.70, stdev=866.23, samples=20 00:22:27.088 lat (msec) : 20=38.41%, 50=17.40%, 100=44.14%, 250=0.05% 00:22:27.088 cpu : usr=2.87%, sys=4.93%, ctx=3585, majf=0, minf=1 00:22:27.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:27.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.088 issued rwts: total=0,15450,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.088 job5: (groupid=0, jobs=1): err= 0: pid=1410324: Wed Nov 20 16:12:57 2024 00:22:27.088 write: IOPS=1332, BW=333MiB/s (349MB/s)(3343MiB/10031msec); 0 zone resets 00:22:27.088 slat (usec): min=15, max=26687, avg=727.69, stdev=1898.20 00:22:27.088 clat (usec): min=1165, max=129908, avg=47269.28, stdev=25388.84 00:22:27.088 lat (usec): min=1238, max=133576, avg=47996.97, stdev=25802.24 00:22:27.088 clat percentiles (msec): 00:22:27.088 | 1.00th=[ 17], 5.00th=[ 18], 10.00th=[ 26], 20.00th=[ 32], 00:22:27.088 | 30.00th=[ 34], 40.00th=[ 35], 50.00th=[ 36], 60.00th=[ 37], 00:22:27.088 | 70.00th=[ 53], 80.00th=[ 71], 90.00th=[ 91], 95.00th=[ 106], 00:22:27.088 | 99.00th=[ 112], 99.50th=[ 113], 99.90th=[ 123], 99.95th=[ 127], 00:22:27.088 | 99.99th=[ 130] 00:22:27.088 bw ( KiB/s): min=149504, max=635904, per=9.73%, 
avg=340711.00, stdev=160317.49, samples=20 00:22:27.088 iops : min= 584, max= 2484, avg=1330.90, stdev=626.24, samples=20 00:22:27.088 lat (msec) : 2=0.12%, 4=0.08%, 10=0.21%, 20=6.87%, 50=60.28% 00:22:27.088 lat (msec) : 100=23.72%, 250=8.72% 00:22:27.088 cpu : usr=2.59%, sys=3.64%, ctx=3135, majf=0, minf=1 00:22:27.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:22:27.088 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.088 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.088 issued rwts: total=0,13371,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.088 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.088 job6: (groupid=0, jobs=1): err= 0: pid=1410325: Wed Nov 20 16:12:57 2024 00:22:27.088 write: IOPS=1073, BW=268MiB/s (281MB/s)(2696MiB/10045msec); 0 zone resets 00:22:27.088 slat (usec): min=24, max=27118, avg=785.34, stdev=2074.68 00:22:27.088 clat (msec): min=2, max=129, avg=58.80, stdev=24.69 00:22:27.088 lat (msec): min=3, max=136, avg=59.59, stdev=25.19 00:22:27.088 clat percentiles (msec): 00:22:27.088 | 1.00th=[ 13], 5.00th=[ 23], 10.00th=[ 30], 20.00th=[ 37], 00:22:27.088 | 30.00th=[ 50], 40.00th=[ 53], 50.00th=[ 54], 60.00th=[ 56], 00:22:27.088 | 70.00th=[ 70], 80.00th=[ 84], 90.00th=[ 103], 95.00th=[ 107], 00:22:27.088 | 99.00th=[ 113], 99.50th=[ 116], 99.90th=[ 126], 99.95th=[ 129], 00:22:27.088 | 99.99th=[ 129] 00:22:27.088 bw ( KiB/s): min=146944, max=430080, per=7.84%, avg=274504.05, stdev=98144.72, samples=20 00:22:27.088 iops : min= 574, max= 1680, avg=1072.25, stdev=383.40, samples=20 00:22:27.088 lat (msec) : 4=0.02%, 10=0.50%, 20=3.21%, 50=26.93%, 100=58.71% 00:22:27.088 lat (msec) : 250=10.64% 00:22:27.088 cpu : usr=2.85%, sys=4.04%, ctx=3339, majf=0, minf=1 00:22:27.088 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:27.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.089 issued rwts: total=0,10785,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.089 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.089 job7: (groupid=0, jobs=1): err= 0: pid=1410326: Wed Nov 20 16:12:57 2024 00:22:27.089 write: IOPS=1096, BW=274MiB/s (287MB/s)(2752MiB/10040msec); 0 zone resets 00:22:27.089 slat (usec): min=22, max=13052, avg=889.47, stdev=1615.02 00:22:27.089 clat (usec): min=16378, max=98972, avg=57472.34, stdev=14138.04 00:22:27.089 lat (msec): min=16, max=103, avg=58.36, stdev=14.33 00:22:27.089 clat percentiles (usec): 00:22:27.089 | 1.00th=[34341], 5.00th=[35914], 10.00th=[36963], 20.00th=[51643], 00:22:27.089 | 30.00th=[53216], 40.00th=[54264], 50.00th=[55313], 60.00th=[56361], 00:22:27.089 | 70.00th=[57410], 80.00th=[66847], 90.00th=[74974], 95.00th=[89654], 00:22:27.089 | 99.00th=[95945], 99.50th=[96994], 99.90th=[98042], 99.95th=[99091], 00:22:27.089 | 99.99th=[99091] 00:22:27.089 bw ( KiB/s): min=173568, max=439808, per=8.00%, avg=280166.40, stdev=65125.15, samples=20 00:22:27.089 iops : min= 678, max= 1718, avg=1094.40, stdev=254.40, samples=20 00:22:27.089 lat (msec) : 20=0.15%, 50=15.80%, 100=84.05% 00:22:27.089 cpu : usr=2.28%, sys=4.29%, ctx=2757, majf=0, minf=1 00:22:27.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:22:27.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:22:27.089 issued rwts: total=0,11007,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.089 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.089 job8: (groupid=0, jobs=1): err= 0: pid=1410327: Wed Nov 20 16:12:57 2024 00:22:27.089 write: IOPS=915, BW=229MiB/s (240MB/s)(2299MiB/10046msec); 0 zone resets 00:22:27.089 slat (usec): min=19, max=41569, avg=1025.68, stdev=2532.03 00:22:27.089 clat (msec): min=3, max=147, avg=68.86, stdev=21.88 00:22:27.089 lat (msec): min=3, max=147, avg=69.89, stdev=22.27 00:22:27.089 clat percentiles (msec): 00:22:27.089 | 1.00th=[ 23], 5.00th=[ 36], 10.00th=[ 49], 20.00th=[ 53], 00:22:27.089 | 30.00th=[ 55], 40.00th=[ 57], 50.00th=[ 63], 60.00th=[ 73], 00:22:27.089 | 70.00th=[ 81], 80.00th=[ 91], 90.00th=[ 105], 95.00th=[ 107], 00:22:27.089 | 99.00th=[ 112], 99.50th=[ 115], 99.90th=[ 144], 99.95th=[ 144], 00:22:27.089 | 99.99th=[ 148] 00:22:27.089 bw ( KiB/s): min=146944, max=322048, per=6.67%, avg=233830.40, stdev=58510.15, samples=20 00:22:27.089 iops : min= 574, max= 1258, avg=913.40, stdev=228.56, samples=20 00:22:27.089 lat (msec) : 4=0.05%, 10=0.09%, 20=0.27%, 50=11.19%, 100=76.37% 00:22:27.089 lat (msec) : 250=12.03% 00:22:27.089 cpu : usr=2.03%, sys=3.59%, ctx=2413, majf=0, minf=1 00:22:27.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:22:27.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.089 issued rwts: total=0,9197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.089 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.089 job9: (groupid=0, jobs=1): err= 0: pid=1410328: Wed Nov 20 16:12:57 2024 00:22:27.089 write: IOPS=1526, BW=382MiB/s (400MB/s)(3833MiB/10042msec); 0 zone resets 00:22:27.089 slat (usec): min=17, max=59317, avg=642.42, stdev=1397.89 00:22:27.089 clat (msec): min=8, max=129, avg=41.26, stdev=21.07 00:22:27.089 lat (msec): min=8, max=129, avg=41.91, stdev=21.38 00:22:27.089 clat percentiles (msec): 00:22:27.089 | 1.00th=[ 17], 5.00th=[ 18], 10.00th=[ 18], 20.00th=[ 19], 00:22:27.089 | 30.00th=[ 20], 40.00th=[ 35], 50.00th=[ 38], 60.00th=[ 53], 00:22:27.089 | 70.00th=[ 55], 80.00th=[ 57], 90.00th=[ 72], 95.00th=[ 81], 00:22:27.089 | 99.00th=[ 92], 99.50th=[ 94], 99.90th=[ 105], 99.95th=[ 113], 00:22:27.089 | 99.99th=[ 130] 00:22:27.089 bw ( KiB/s): min=193024, max=899072, per=11.16%, avg=390914.15, stdev=216411.67, samples=20 00:22:27.089 iops : min= 754, max= 3512, avg=1527.00, stdev=845.36, samples=20 00:22:27.089 lat (msec) : 10=0.03%, 20=32.40%, 50=23.53%, 100=43.90%, 250=0.15% 00:22:27.089 cpu : usr=2.91%, sys=4.94%, ctx=3542, majf=0, minf=1 00:22:27.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.6% 00:22:27.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.089 issued rwts: total=0,15332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.089 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.089 job10: (groupid=0, jobs=1): err= 0: pid=1410329: Wed Nov 20 16:12:57 2024 00:22:27.089 write: IOPS=903, BW=226MiB/s (237MB/s)(2269MiB/10042msec); 0 zone resets 00:22:27.089 slat (usec): min=23, max=24431, avg=1096.58, stdev=2274.06 00:22:27.089 clat (msec): min=22, max=129, avg=69.69, stdev=20.81 00:22:27.089 lat (msec): min=22, max=131, avg=70.79, stdev=21.17 00:22:27.089 clat percentiles 
(msec): 00:22:27.089 | 1.00th=[ 35], 5.00th=[ 37], 10.00th=[ 39], 20.00th=[ 54], 00:22:27.089 | 30.00th=[ 57], 40.00th=[ 67], 50.00th=[ 71], 60.00th=[ 73], 00:22:27.089 | 70.00th=[ 78], 80.00th=[ 87], 90.00th=[ 104], 95.00th=[ 107], 00:22:27.089 | 99.00th=[ 113], 99.50th=[ 116], 99.90th=[ 128], 99.95th=[ 129], 00:22:27.089 | 99.99th=[ 130] 00:22:27.089 bw ( KiB/s): min=149504, max=428032, per=6.59%, avg=230753.85, stdev=69811.68, samples=20 00:22:27.089 iops : min= 584, max= 1672, avg=901.35, stdev=272.71, samples=20 00:22:27.089 lat (msec) : 50=15.97%, 100=71.36%, 250=12.67% 00:22:27.089 cpu : usr=1.99%, sys=3.97%, ctx=2233, majf=0, minf=1 00:22:27.089 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:22:27.089 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:27.089 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:22:27.089 issued rwts: total=0,9076,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:27.089 latency : target=0, window=0, percentile=100.00%, depth=64 00:22:27.089 00:22:27.089 Run status group 0 (all jobs): 00:22:27.089 WRITE: bw=3421MiB/s (3587MB/s), 224MiB/s-434MiB/s (235MB/s-455MB/s), io=33.6GiB (36.0GB), run=10031-10047msec 00:22:27.089 00:22:27.089 Disk stats (read/write): 00:22:27.089 nvme0n1: ios=49/17565, merge=0/0, ticks=17/1214930, in_queue=1214947, util=96.65% 00:22:27.089 nvme10n1: ios=0/34461, merge=0/0, ticks=0/1217804, in_queue=1217804, util=96.80% 00:22:27.089 nvme1n1: ios=0/23093, merge=0/0, ticks=0/1217684, in_queue=1217684, util=97.15% 00:22:27.089 nvme2n1: ios=0/29564, merge=0/0, ticks=0/1219582, in_queue=1219582, util=97.36% 00:22:27.089 nvme3n1: ios=0/30468, merge=0/0, ticks=0/1219451, in_queue=1219451, util=97.44% 00:22:27.089 nvme4n1: ios=0/26128, merge=0/0, ticks=0/1217837, in_queue=1217837, util=97.83% 00:22:27.089 nvme5n1: ios=0/21141, merge=0/0, ticks=0/1220025, in_queue=1220025, util=98.01% 00:22:27.089 nvme6n1: ios=0/21594, merge=0/0, ticks=0/1216524, in_queue=1216524, util=98.14% 00:22:27.089 nvme7n1: ios=0/17975, merge=0/0, ticks=0/1216026, in_queue=1216026, util=98.66% 00:22:27.089 nvme8n1: ios=0/30247, merge=0/0, ticks=0/1218683, in_queue=1218683, util=98.83% 00:22:27.089 nvme9n1: ios=0/17733, merge=0/0, ticks=0/1214769, in_queue=1214769, util=98.98% 00:22:27.089 16:12:57 -- target/multiconnection.sh@36 -- # sync 00:22:27.089 16:12:57 -- target/multiconnection.sh@37 -- # seq 1 11 00:22:27.089 16:12:57 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:27.089 16:12:57 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:22:27.660 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:22:27.660 16:12:58 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:22:27.660 16:12:58 -- common/autotest_common.sh@1208 -- # local i=0 00:22:27.660 16:12:58 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:27.660 16:12:58 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK1 00:22:27.660 16:12:58 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:27.660 16:12:58 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK1 00:22:27.660 16:12:58 -- common/autotest_common.sh@1220 -- # return 0 00:22:27.660 16:12:58 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:27.660 16:12:58 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:27.660 16:12:58 -- common/autotest_common.sh@10 -- # set +x 00:22:27.660 16:12:58 -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:27.660 16:12:58 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:27.660 16:12:58 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:22:28.599 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:22:28.599 16:12:59 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:22:28.599 16:12:59 -- common/autotest_common.sh@1208 -- # local i=0 00:22:28.599 16:12:59 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:28.599 16:12:59 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK2 00:22:28.599 16:12:59 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:28.599 16:12:59 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK2 00:22:28.599 16:12:59 -- common/autotest_common.sh@1220 -- # return 0 00:22:28.599 16:12:59 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:22:28.599 16:12:59 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:28.599 16:12:59 -- common/autotest_common.sh@10 -- # set +x 00:22:28.599 16:12:59 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:28.599 16:12:59 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:28.599 16:12:59 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:22:29.536 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:22:29.536 16:13:00 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:22:29.536 16:13:00 -- common/autotest_common.sh@1208 -- # local i=0 00:22:29.536 16:13:00 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:29.536 16:13:00 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK3 00:22:29.536 16:13:00 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:29.536 16:13:00 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK3 00:22:29.536 16:13:00 -- common/autotest_common.sh@1220 -- # return 0 00:22:29.536 16:13:00 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:22:29.536 16:13:00 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:29.536 16:13:00 -- common/autotest_common.sh@10 -- # set +x 00:22:29.536 16:13:00 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:29.536 16:13:00 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:29.536 16:13:00 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:22:30.472 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:22:30.472 16:13:01 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:22:30.472 16:13:01 -- common/autotest_common.sh@1208 -- # local i=0 00:22:30.472 16:13:01 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:30.472 16:13:01 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK4 00:22:30.731 16:13:01 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:30.731 16:13:01 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK4 00:22:30.731 16:13:01 -- common/autotest_common.sh@1220 -- # return 0 00:22:30.731 16:13:01 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:22:30.731 16:13:01 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:30.731 16:13:01 -- common/autotest_common.sh@10 -- # set +x 00:22:30.731 16:13:01 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:30.731 16:13:01 -- 
target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:30.731 16:13:01 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:22:31.669 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:22:31.669 16:13:02 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:22:31.669 16:13:02 -- common/autotest_common.sh@1208 -- # local i=0 00:22:31.669 16:13:02 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:31.669 16:13:02 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK5 00:22:31.669 16:13:02 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:31.669 16:13:02 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK5 00:22:31.669 16:13:02 -- common/autotest_common.sh@1220 -- # return 0 00:22:31.669 16:13:02 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:22:31.669 16:13:02 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:31.669 16:13:02 -- common/autotest_common.sh@10 -- # set +x 00:22:31.669 16:13:02 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:31.669 16:13:02 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:31.669 16:13:02 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:22:32.607 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:22:32.607 16:13:03 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:22:32.607 16:13:03 -- common/autotest_common.sh@1208 -- # local i=0 00:22:32.607 16:13:03 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:32.607 16:13:03 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK6 00:22:32.607 16:13:03 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:32.607 16:13:03 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK6 00:22:32.607 16:13:03 -- common/autotest_common.sh@1220 -- # return 0 00:22:32.607 16:13:03 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:22:32.607 16:13:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:32.607 16:13:03 -- common/autotest_common.sh@10 -- # set +x 00:22:32.607 16:13:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:32.607 16:13:03 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:32.607 16:13:03 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:22:33.546 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:22:33.546 16:13:04 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:22:33.546 16:13:04 -- common/autotest_common.sh@1208 -- # local i=0 00:22:33.546 16:13:04 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:33.546 16:13:04 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK7 00:22:33.546 16:13:04 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK7 00:22:33.546 16:13:04 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:33.546 16:13:04 -- common/autotest_common.sh@1220 -- # return 0 00:22:33.546 16:13:04 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:22:33.546 16:13:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:33.546 16:13:04 -- common/autotest_common.sh@10 -- # set +x 00:22:33.546 16:13:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:33.546 16:13:04 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:33.546 16:13:04 -- 
target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:22:34.485 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:22:34.485 16:13:05 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:22:34.485 16:13:05 -- common/autotest_common.sh@1208 -- # local i=0 00:22:34.485 16:13:05 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:34.485 16:13:05 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK8 00:22:34.745 16:13:05 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:34.745 16:13:05 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK8 00:22:34.745 16:13:05 -- common/autotest_common.sh@1220 -- # return 0 00:22:34.745 16:13:05 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:22:34.745 16:13:05 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:34.745 16:13:05 -- common/autotest_common.sh@10 -- # set +x 00:22:34.745 16:13:05 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:34.745 16:13:05 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:34.745 16:13:05 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:22:35.682 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:22:35.682 16:13:06 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:22:35.682 16:13:06 -- common/autotest_common.sh@1208 -- # local i=0 00:22:35.682 16:13:06 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:35.682 16:13:06 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK9 00:22:35.682 16:13:06 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:35.682 16:13:06 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK9 00:22:35.682 16:13:06 -- common/autotest_common.sh@1220 -- # return 0 00:22:35.682 16:13:06 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:22:35.682 16:13:06 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:35.682 16:13:06 -- common/autotest_common.sh@10 -- # set +x 00:22:35.682 16:13:06 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:35.682 16:13:06 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:35.682 16:13:06 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:22:36.621 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:22:36.621 16:13:07 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:22:36.621 16:13:07 -- common/autotest_common.sh@1208 -- # local i=0 00:22:36.621 16:13:07 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:36.621 16:13:07 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK10 00:22:36.621 16:13:07 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:36.621 16:13:07 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK10 00:22:36.621 16:13:07 -- common/autotest_common.sh@1220 -- # return 0 00:22:36.621 16:13:07 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:22:36.621 16:13:07 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:36.621 16:13:07 -- common/autotest_common.sh@10 -- # set +x 00:22:36.621 16:13:07 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:36.621 16:13:07 -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:22:36.621 16:13:07 -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 
00:22:37.560 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:22:37.560 16:13:08 -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:22:37.560 16:13:08 -- common/autotest_common.sh@1208 -- # local i=0 00:22:37.560 16:13:08 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:22:37.560 16:13:08 -- common/autotest_common.sh@1209 -- # grep -q -w SPDK11 00:22:37.560 16:13:08 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:22:37.560 16:13:08 -- common/autotest_common.sh@1216 -- # grep -q -w SPDK11 00:22:37.560 16:13:08 -- common/autotest_common.sh@1220 -- # return 0 00:22:37.560 16:13:08 -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:22:37.560 16:13:08 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:37.560 16:13:08 -- common/autotest_common.sh@10 -- # set +x 00:22:37.560 16:13:08 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:37.560 16:13:08 -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:22:37.560 16:13:08 -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:22:37.560 16:13:08 -- target/multiconnection.sh@47 -- # nvmftestfini 00:22:37.560 16:13:08 -- nvmf/common.sh@476 -- # nvmfcleanup 00:22:37.560 16:13:08 -- nvmf/common.sh@116 -- # sync 00:22:37.560 16:13:08 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:22:37.560 16:13:08 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:22:37.560 16:13:08 -- nvmf/common.sh@119 -- # set +e 00:22:37.560 16:13:08 -- nvmf/common.sh@120 -- # for i in {1..20} 00:22:37.560 16:13:08 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:22:37.560 rmmod nvme_rdma 00:22:37.560 rmmod nvme_fabrics 00:22:37.819 16:13:08 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:22:37.819 16:13:08 -- nvmf/common.sh@123 -- # set -e 00:22:37.819 16:13:08 -- nvmf/common.sh@124 -- # return 0 00:22:37.819 16:13:08 -- nvmf/common.sh@477 -- # '[' -n 1401515 ']' 00:22:37.819 16:13:08 -- nvmf/common.sh@478 -- # killprocess 1401515 00:22:37.819 16:13:08 -- common/autotest_common.sh@936 -- # '[' -z 1401515 ']' 00:22:37.819 16:13:08 -- common/autotest_common.sh@940 -- # kill -0 1401515 00:22:37.819 16:13:08 -- common/autotest_common.sh@941 -- # uname 00:22:37.819 16:13:08 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:37.819 16:13:08 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1401515 00:22:37.819 16:13:08 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:37.819 16:13:08 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:37.819 16:13:08 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1401515' 00:22:37.819 killing process with pid 1401515 00:22:37.819 16:13:08 -- common/autotest_common.sh@955 -- # kill 1401515 00:22:37.819 16:13:08 -- common/autotest_common.sh@960 -- # wait 1401515 00:22:38.389 16:13:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:22:38.389 16:13:08 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:22:38.389 00:22:38.389 real 1m15.465s 00:22:38.389 user 4m54.745s 00:22:38.389 sys 0m18.724s 00:22:38.389 16:13:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:22:38.389 16:13:08 -- common/autotest_common.sh@10 -- # set +x 00:22:38.389 ************************************ 00:22:38.389 END TEST nvmf_multiconnection 00:22:38.389 ************************************ 00:22:38.389 16:13:08 -- nvmf/nvmf.sh@66 -- # run_test nvmf_initiator_timeout 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:22:38.389 16:13:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:38.389 16:13:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:38.389 16:13:08 -- common/autotest_common.sh@10 -- # set +x 00:22:38.389 ************************************ 00:22:38.389 START TEST nvmf_initiator_timeout 00:22:38.389 ************************************ 00:22:38.389 16:13:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=rdma 00:22:38.389 * Looking for test storage... 00:22:38.389 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:22:38.389 16:13:09 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:22:38.389 16:13:09 -- common/autotest_common.sh@1690 -- # lcov --version 00:22:38.389 16:13:09 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:22:38.389 16:13:09 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:22:38.389 16:13:09 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:22:38.389 16:13:09 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:22:38.390 16:13:09 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:22:38.390 16:13:09 -- scripts/common.sh@335 -- # IFS=.-: 00:22:38.390 16:13:09 -- scripts/common.sh@335 -- # read -ra ver1 00:22:38.390 16:13:09 -- scripts/common.sh@336 -- # IFS=.-: 00:22:38.390 16:13:09 -- scripts/common.sh@336 -- # read -ra ver2 00:22:38.390 16:13:09 -- scripts/common.sh@337 -- # local 'op=<' 00:22:38.390 16:13:09 -- scripts/common.sh@339 -- # ver1_l=2 00:22:38.390 16:13:09 -- scripts/common.sh@340 -- # ver2_l=1 00:22:38.390 16:13:09 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:22:38.390 16:13:09 -- scripts/common.sh@343 -- # case "$op" in 00:22:38.390 16:13:09 -- scripts/common.sh@344 -- # : 1 00:22:38.390 16:13:09 -- scripts/common.sh@363 -- # (( v = 0 )) 00:22:38.390 16:13:09 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:38.390 16:13:09 -- scripts/common.sh@364 -- # decimal 1 00:22:38.390 16:13:09 -- scripts/common.sh@352 -- # local d=1 00:22:38.390 16:13:09 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:38.390 16:13:09 -- scripts/common.sh@354 -- # echo 1 00:22:38.390 16:13:09 -- scripts/common.sh@364 -- # ver1[v]=1 00:22:38.390 16:13:09 -- scripts/common.sh@365 -- # decimal 2 00:22:38.390 16:13:09 -- scripts/common.sh@352 -- # local d=2 00:22:38.390 16:13:09 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:38.390 16:13:09 -- scripts/common.sh@354 -- # echo 2 00:22:38.390 16:13:09 -- scripts/common.sh@365 -- # ver2[v]=2 00:22:38.390 16:13:09 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:22:38.390 16:13:09 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:22:38.390 16:13:09 -- scripts/common.sh@367 -- # return 0 00:22:38.390 16:13:09 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:38.390 16:13:09 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:22:38.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.390 --rc genhtml_branch_coverage=1 00:22:38.390 --rc genhtml_function_coverage=1 00:22:38.390 --rc genhtml_legend=1 00:22:38.390 --rc geninfo_all_blocks=1 00:22:38.390 --rc geninfo_unexecuted_blocks=1 00:22:38.390 00:22:38.390 ' 00:22:38.390 16:13:09 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:22:38.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.390 --rc genhtml_branch_coverage=1 00:22:38.390 --rc genhtml_function_coverage=1 00:22:38.390 --rc genhtml_legend=1 00:22:38.390 --rc geninfo_all_blocks=1 00:22:38.390 --rc geninfo_unexecuted_blocks=1 00:22:38.390 00:22:38.390 ' 00:22:38.390 16:13:09 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:22:38.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.390 --rc genhtml_branch_coverage=1 00:22:38.390 --rc genhtml_function_coverage=1 00:22:38.390 --rc genhtml_legend=1 00:22:38.390 --rc geninfo_all_blocks=1 00:22:38.390 --rc geninfo_unexecuted_blocks=1 00:22:38.390 00:22:38.390 ' 00:22:38.390 16:13:09 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:22:38.390 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:38.390 --rc genhtml_branch_coverage=1 00:22:38.390 --rc genhtml_function_coverage=1 00:22:38.390 --rc genhtml_legend=1 00:22:38.390 --rc geninfo_all_blocks=1 00:22:38.390 --rc geninfo_unexecuted_blocks=1 00:22:38.390 00:22:38.390 ' 00:22:38.390 16:13:09 -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.390 16:13:09 -- nvmf/common.sh@7 -- # uname -s 00:22:38.390 16:13:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.390 16:13:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.390 16:13:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.390 16:13:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.390 16:13:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.390 16:13:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.390 16:13:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.390 16:13:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.390 16:13:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.390 16:13:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.390 16:13:09 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:22:38.390 16:13:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:22:38.390 16:13:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.390 16:13:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.390 16:13:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.390 16:13:09 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:22:38.390 16:13:09 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.390 16:13:09 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.390 16:13:09 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.390 16:13:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.390 16:13:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.390 16:13:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.390 16:13:09 -- paths/export.sh@5 -- # export PATH 00:22:38.390 16:13:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.390 16:13:09 -- nvmf/common.sh@46 -- # : 0 00:22:38.390 16:13:09 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:22:38.390 16:13:09 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:22:38.390 16:13:09 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:22:38.390 16:13:09 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.390 16:13:09 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.390 16:13:09 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:22:38.390 16:13:09 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:22:38.390 16:13:09 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:22:38.390 16:13:09 -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:38.390 16:13:09 -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:38.390 16:13:09 -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:22:38.390 16:13:09 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:22:38.390 16:13:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.390 16:13:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:22:38.390 16:13:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:22:38.390 16:13:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:22:38.390 16:13:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.390 16:13:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:38.390 16:13:09 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.390 16:13:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:22:38.390 16:13:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:22:38.390 16:13:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:22:38.390 16:13:09 -- common/autotest_common.sh@10 -- # set +x 00:22:45.037 16:13:15 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:22:45.037 16:13:15 -- nvmf/common.sh@290 -- # pci_devs=() 00:22:45.037 16:13:15 -- nvmf/common.sh@290 -- # local -a pci_devs 00:22:45.037 16:13:15 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:22:45.037 16:13:15 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:22:45.037 16:13:15 -- nvmf/common.sh@292 -- # pci_drivers=() 00:22:45.037 16:13:15 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:22:45.037 16:13:15 -- nvmf/common.sh@294 -- # net_devs=() 00:22:45.037 16:13:15 -- nvmf/common.sh@294 -- # local -ga net_devs 00:22:45.037 16:13:15 -- nvmf/common.sh@295 -- # e810=() 00:22:45.037 16:13:15 -- nvmf/common.sh@295 -- # local -ga e810 00:22:45.037 16:13:15 -- nvmf/common.sh@296 -- # x722=() 00:22:45.037 16:13:15 -- nvmf/common.sh@296 -- # local -ga x722 00:22:45.037 16:13:15 -- nvmf/common.sh@297 -- # mlx=() 00:22:45.037 16:13:15 -- nvmf/common.sh@297 -- # local -ga mlx 00:22:45.037 16:13:15 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:45.037 16:13:15 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:45.037 16:13:15 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:45.037 16:13:15 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:45.037 16:13:15 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:45.037 16:13:15 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:45.037 16:13:15 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:45.037 16:13:15 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:45.037 16:13:15 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:45.037 16:13:15 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:45.038 16:13:15 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:45.038 16:13:15 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:22:45.038 16:13:15 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:22:45.038 16:13:15 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:22:45.038 16:13:15 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:22:45.038 16:13:15 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:22:45.038 16:13:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:45.038 16:13:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:22:45.038 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:22:45.038 16:13:15 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:45.038 16:13:15 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:22:45.038 16:13:15 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:22:45.038 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:22:45.038 16:13:15 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:22:45.038 16:13:15 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:22:45.038 16:13:15 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:45.038 16:13:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.038 16:13:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:45.038 16:13:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.038 16:13:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:22:45.038 Found net devices under 0000:d9:00.0: mlx_0_0 00:22:45.038 16:13:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.038 16:13:15 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:22:45.038 16:13:15 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:45.038 16:13:15 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:22:45.038 16:13:15 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:45.038 16:13:15 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:22:45.038 Found net devices under 0000:d9:00.1: mlx_0_1 00:22:45.038 16:13:15 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:22:45.038 16:13:15 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:22:45.038 16:13:15 -- nvmf/common.sh@402 -- # is_hw=yes 00:22:45.038 16:13:15 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@408 -- # rdma_device_init 00:22:45.038 16:13:15 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:22:45.038 16:13:15 -- nvmf/common.sh@57 -- # uname 00:22:45.038 16:13:15 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:22:45.038 16:13:15 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:22:45.038 16:13:15 -- nvmf/common.sh@62 -- # modprobe ib_core 00:22:45.038 16:13:15 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:22:45.038 16:13:15 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:22:45.038 16:13:15 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:22:45.038 16:13:15 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:22:45.038 16:13:15 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:22:45.038 16:13:15 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:22:45.038 16:13:15 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:22:45.038 16:13:15 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:22:45.038 16:13:15 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:45.038 16:13:15 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:45.038 16:13:15 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:45.038 16:13:15 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:45.038 16:13:15 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:45.038 16:13:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:45.038 16:13:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.038 16:13:15 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:45.038 16:13:15 -- nvmf/common.sh@104 -- # continue 2 00:22:45.038 16:13:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:45.038 16:13:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.038 16:13:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.038 16:13:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:45.038 16:13:15 -- nvmf/common.sh@104 -- # continue 2 00:22:45.038 16:13:15 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:45.038 16:13:15 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:22:45.038 16:13:15 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:45.038 16:13:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:45.038 16:13:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:45.038 16:13:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:45.038 16:13:15 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:22:45.038 16:13:15 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:22:45.038 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:45.038 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:22:45.038 altname enp217s0f0np0 00:22:45.038 altname ens818f0np0 00:22:45.038 inet 192.168.100.8/24 scope global mlx_0_0 00:22:45.038 valid_lft forever preferred_lft forever 00:22:45.038 16:13:15 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:22:45.038 16:13:15 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:22:45.038 16:13:15 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:45.038 16:13:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:45.038 16:13:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:45.038 16:13:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:45.038 16:13:15 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:22:45.038 16:13:15 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:22:45.038 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:22:45.038 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:22:45.038 altname enp217s0f1np1 00:22:45.038 altname ens818f1np1 00:22:45.038 inet 192.168.100.9/24 scope global mlx_0_1 00:22:45.038 valid_lft forever preferred_lft forever 00:22:45.038 16:13:15 -- nvmf/common.sh@410 -- # return 0 00:22:45.038 16:13:15 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:22:45.038 16:13:15 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:22:45.038 16:13:15 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:22:45.038 16:13:15 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:22:45.038 16:13:15 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:22:45.038 16:13:15 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:22:45.038 16:13:15 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:22:45.038 16:13:15 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:22:45.038 16:13:15 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:22:45.038 16:13:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:45.038 16:13:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.038 16:13:15 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:22:45.038 16:13:15 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:22:45.038 16:13:15 -- nvmf/common.sh@104 -- # continue 2 00:22:45.039 16:13:15 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:22:45.039 16:13:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.039 16:13:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:22:45.039 16:13:15 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:22:45.039 16:13:15 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:22:45.039 16:13:15 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:22:45.039 16:13:15 -- nvmf/common.sh@104 -- # continue 2 00:22:45.039 16:13:15 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:45.039 16:13:15 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:22:45.039 16:13:15 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:22:45.039 16:13:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:22:45.039 16:13:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:45.039 16:13:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:45.039 16:13:15 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:22:45.039 16:13:15 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:22:45.039 16:13:15 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:22:45.039 16:13:15 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:22:45.039 16:13:15 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:22:45.039 16:13:15 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:22:45.039 16:13:15 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:22:45.039 192.168.100.9' 00:22:45.039 16:13:15 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:22:45.039 192.168.100.9' 00:22:45.039 16:13:15 -- nvmf/common.sh@445 -- # head -n 1 00:22:45.039 16:13:15 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:22:45.039 16:13:15 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:22:45.039 192.168.100.9' 00:22:45.039 16:13:15 -- nvmf/common.sh@446 -- # tail -n +2 00:22:45.039 16:13:15 -- nvmf/common.sh@446 -- # head -n 1 00:22:45.039 16:13:15 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:22:45.039 16:13:15 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:22:45.039 16:13:15 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:22:45.039 16:13:15 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:22:45.039 16:13:15 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:22:45.039 16:13:15 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:22:45.039 16:13:15 -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:22:45.039 16:13:15 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:22:45.039 16:13:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:45.039 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:22:45.039 16:13:15 -- nvmf/common.sh@469 -- # nvmfpid=1417127 00:22:45.039 16:13:15 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:45.039 16:13:15 -- nvmf/common.sh@470 -- # waitforlisten 1417127 00:22:45.039 16:13:15 -- common/autotest_common.sh@829 -- # '[' -z 1417127 ']' 00:22:45.039 16:13:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.039 16:13:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:45.039 16:13:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.039 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.039 16:13:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:45.039 16:13:15 -- common/autotest_common.sh@10 -- # set +x 00:22:45.039 [2024-11-20 16:13:15.718877] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:22:45.039 [2024-11-20 16:13:15.718936] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.039 EAL: No free 2048 kB hugepages reported on node 1 00:22:45.039 [2024-11-20 16:13:15.790086] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:45.039 [2024-11-20 16:13:15.828557] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:22:45.039 [2024-11-20 16:13:15.828672] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.039 [2024-11-20 16:13:15.828682] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.039 [2024-11-20 16:13:15.828690] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:45.039 [2024-11-20 16:13:15.828814] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:45.039 [2024-11-20 16:13:15.828925] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:45.039 [2024-11-20 16:13:15.829022] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:45.039 [2024-11-20 16:13:15.829028] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:45.978 16:13:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:45.978 16:13:16 -- common/autotest_common.sh@862 -- # return 0 00:22:45.978 16:13:16 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:22:45.978 16:13:16 -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:45.978 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:22:45.978 16:13:16 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:45.978 16:13:16 -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:22:45.978 16:13:16 -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:45.978 16:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.978 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:22:45.978 Malloc0 00:22:45.978 16:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.978 16:13:16 -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:22:45.978 16:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.978 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:22:45.978 Delay0 00:22:45.978 16:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.978 16:13:16 -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:22:45.978 16:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.978 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:22:45.978 [2024-11-20 16:13:16.642623] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a2f5b0/0x1a39980) succeed. 00:22:45.978 [2024-11-20 16:13:16.651985] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a30b50/0x1a7b020) succeed. 
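The RPC calls traced above stage the initiator_timeout target: a 64 MB, 512-byte-block malloc bdev (Malloc0) is wrapped in a delay bdev (Delay0) whose average and p99 read/write latencies start at 30 us, and an RDMA transport is created with 1024 shared buffers and an 8192-byte I/O unit. A rough stand-alone sketch of the same setup, assuming scripts/rpc.py from the SPDK tree is driven directly against the default RPC socket instead of the harness's rpc_cmd wrapper:

    # Create the backing malloc bdev and wrap it in a delay bdev (latency values in microseconds)
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30
    # Create the NVMe-oF RDMA transport used by the subsystem configured in the following steps
    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192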
00:22:45.978 16:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.978 16:13:16 -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:22:45.978 16:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.978 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:22:45.978 16:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:45.978 16:13:16 -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:22:45.978 16:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:45.978 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:22:46.238 16:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.238 16:13:16 -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:22:46.238 16:13:16 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.238 16:13:16 -- common/autotest_common.sh@10 -- # set +x 00:22:46.238 [2024-11-20 16:13:16.794719] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:22:46.238 16:13:16 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.238 16:13:16 -- target/initiator_timeout.sh@29 -- # nvme connect -i 15 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e --hostid=8013ee90-59d8-e711-906e-00163566263e -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.8 -s 4420 00:22:47.176 16:13:17 -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:22:47.176 16:13:17 -- common/autotest_common.sh@1187 -- # local i=0 00:22:47.176 16:13:17 -- common/autotest_common.sh@1188 -- # local nvme_device_counter=1 nvme_devices=0 00:22:47.176 16:13:17 -- common/autotest_common.sh@1189 -- # [[ -n '' ]] 00:22:47.176 16:13:17 -- common/autotest_common.sh@1194 -- # sleep 2 00:22:49.087 16:13:19 -- common/autotest_common.sh@1195 -- # (( i++ <= 15 )) 00:22:49.087 16:13:19 -- common/autotest_common.sh@1196 -- # lsblk -l -o NAME,SERIAL 00:22:49.087 16:13:19 -- common/autotest_common.sh@1196 -- # grep -c SPDKISFASTANDAWESOME 00:22:49.087 16:13:19 -- common/autotest_common.sh@1196 -- # nvme_devices=1 00:22:49.087 16:13:19 -- common/autotest_common.sh@1197 -- # (( nvme_devices == nvme_device_counter )) 00:22:49.087 16:13:19 -- common/autotest_common.sh@1197 -- # return 0 00:22:49.087 16:13:19 -- target/initiator_timeout.sh@35 -- # fio_pid=1417941 00:22:49.087 16:13:19 -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:22:49.087 16:13:19 -- target/initiator_timeout.sh@37 -- # sleep 3 00:22:49.087 [global] 00:22:49.087 thread=1 00:22:49.087 invalidate=1 00:22:49.087 rw=write 00:22:49.087 time_based=1 00:22:49.087 runtime=60 00:22:49.087 ioengine=libaio 00:22:49.087 direct=1 00:22:49.087 bs=4096 00:22:49.087 iodepth=1 00:22:49.087 norandommap=0 00:22:49.087 numjobs=1 00:22:49.087 00:22:49.087 verify_dump=1 00:22:49.087 verify_backlog=512 00:22:49.087 verify_state_save=0 00:22:49.087 do_verify=1 00:22:49.087 verify=crc32c-intel 00:22:49.087 [job0] 00:22:49.087 filename=/dev/nvme0n1 00:22:49.087 Could not set queue depth (nvme0n1) 00:22:49.654 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:22:49.654 fio-3.35 00:22:49.654 Starting 1 thread 00:22:52.190 16:13:22 -- 
target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:22:52.190 16:13:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.190 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:52.190 true 00:22:52.190 16:13:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.190 16:13:22 -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:22:52.190 16:13:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.190 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:52.190 true 00:22:52.190 16:13:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.190 16:13:22 -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:22:52.190 16:13:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.190 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:52.190 true 00:22:52.190 16:13:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.190 16:13:22 -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:22:52.190 16:13:22 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:52.190 16:13:22 -- common/autotest_common.sh@10 -- # set +x 00:22:52.190 true 00:22:52.190 16:13:22 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:52.190 16:13:22 -- target/initiator_timeout.sh@45 -- # sleep 3 00:22:55.479 16:13:25 -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:22:55.479 16:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.479 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.479 true 00:22:55.479 16:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.479 16:13:25 -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:22:55.479 16:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.479 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.479 true 00:22:55.479 16:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.479 16:13:25 -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:22:55.479 16:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.479 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.479 true 00:22:55.479 16:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.479 16:13:25 -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:22:55.479 16:13:25 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:55.479 16:13:25 -- common/autotest_common.sh@10 -- # set +x 00:22:55.479 true 00:22:55.479 16:13:25 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:55.479 16:13:25 -- target/initiator_timeout.sh@53 -- # fio_status=0 00:22:55.479 16:13:25 -- target/initiator_timeout.sh@54 -- # wait 1417941 00:23:51.718 00:23:51.718 job0: (groupid=0, jobs=1): err= 0: pid=1418096: Wed Nov 20 16:14:20 2024 00:23:51.718 read: IOPS=1246, BW=4986KiB/s (5106kB/s)(292MiB/60000msec) 00:23:51.718 slat (usec): min=6, max=14685, avg= 9.48, stdev=71.26 00:23:51.718 clat (usec): min=37, max=42331k, avg=671.91, stdev=154786.62 00:23:51.718 lat (usec): min=94, max=42331k, avg=681.39, stdev=154786.64 00:23:51.718 clat percentiles (usec): 00:23:51.718 | 1.00th=[ 92], 5.00th=[ 96], 10.00th=[ 98], 20.00th=[ 101], 00:23:51.718 | 30.00th=[ 102], 40.00th=[ 104], 50.00th=[ 106], 
60.00th=[ 108], 00:23:51.718 | 70.00th=[ 110], 80.00th=[ 112], 90.00th=[ 115], 95.00th=[ 118], 00:23:51.718 | 99.00th=[ 123], 99.50th=[ 125], 99.90th=[ 130], 99.95th=[ 137], 00:23:51.718 | 99.99th=[ 208] 00:23:51.718 write: IOPS=1254, BW=5018KiB/s (5138kB/s)(294MiB/60000msec); 0 zone resets 00:23:51.718 slat (usec): min=8, max=962, avg=11.86, stdev= 4.03 00:23:51.718 clat (usec): min=34, max=1605, avg=103.05, stdev= 9.00 00:23:51.718 lat (usec): min=91, max=1618, avg=114.91, stdev= 9.77 00:23:51.718 clat percentiles (usec): 00:23:51.718 | 1.00th=[ 89], 5.00th=[ 92], 10.00th=[ 95], 20.00th=[ 98], 00:23:51.718 | 30.00th=[ 100], 40.00th=[ 101], 50.00th=[ 103], 60.00th=[ 104], 00:23:51.718 | 70.00th=[ 106], 80.00th=[ 109], 90.00th=[ 113], 95.00th=[ 115], 00:23:51.718 | 99.00th=[ 120], 99.50th=[ 122], 99.90th=[ 129], 99.95th=[ 143], 00:23:51.718 | 99.99th=[ 265] 00:23:51.718 bw ( KiB/s): min= 2192, max=18288, per=100.00%, avg=16270.22, stdev=3290.24, samples=36 00:23:51.718 iops : min= 548, max= 4572, avg=4067.50, stdev=822.54, samples=36 00:23:51.718 lat (usec) : 50=0.01%, 100=25.85%, 250=74.13%, 500=0.01% 00:23:51.718 lat (msec) : 2=0.01%, >=2000=0.01% 00:23:51.718 cpu : usr=1.96%, sys=3.19%, ctx=150063, majf=0, minf=138 00:23:51.718 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:23:51.718 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.718 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:51.718 issued rwts: total=74791,75264,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:51.718 latency : target=0, window=0, percentile=100.00%, depth=1 00:23:51.718 00:23:51.718 Run status group 0 (all jobs): 00:23:51.718 READ: bw=4986KiB/s (5106kB/s), 4986KiB/s-4986KiB/s (5106kB/s-5106kB/s), io=292MiB (306MB), run=60000-60000msec 00:23:51.718 WRITE: bw=5018KiB/s (5138kB/s), 5018KiB/s-5018KiB/s (5138kB/s-5138kB/s), io=294MiB (308MB), run=60000-60000msec 00:23:51.718 00:23:51.718 Disk stats (read/write): 00:23:51.718 nvme0n1: ios=74744/74752, merge=0/0, ticks=7181/7039, in_queue=14220, util=99.49% 00:23:51.718 16:14:20 -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:51.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:23:51.718 16:14:21 -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:51.718 16:14:21 -- common/autotest_common.sh@1208 -- # local i=0 00:23:51.718 16:14:21 -- common/autotest_common.sh@1209 -- # lsblk -o NAME,SERIAL 00:23:51.718 16:14:21 -- common/autotest_common.sh@1209 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:51.718 16:14:21 -- common/autotest_common.sh@1216 -- # lsblk -l -o NAME,SERIAL 00:23:51.718 16:14:21 -- common/autotest_common.sh@1216 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:51.718 16:14:21 -- common/autotest_common.sh@1220 -- # return 0 00:23:51.718 16:14:21 -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:23:51.718 16:14:21 -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:23:51.718 nvmf hotplug test: fio successful as expected 00:23:51.718 16:14:21 -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:51.718 16:14:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:51.718 16:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:51.718 16:14:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:51.718 16:14:21 -- target/initiator_timeout.sh@69 -- # rm -f 
./local-job0-0-verify.state 00:23:51.718 16:14:21 -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:23:51.718 16:14:21 -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:23:51.718 16:14:21 -- nvmf/common.sh@476 -- # nvmfcleanup 00:23:51.718 16:14:21 -- nvmf/common.sh@116 -- # sync 00:23:51.718 16:14:21 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:23:51.718 16:14:21 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:23:51.718 16:14:21 -- nvmf/common.sh@119 -- # set +e 00:23:51.718 16:14:21 -- nvmf/common.sh@120 -- # for i in {1..20} 00:23:51.718 16:14:21 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:23:51.718 rmmod nvme_rdma 00:23:51.718 rmmod nvme_fabrics 00:23:51.718 16:14:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:23:51.718 16:14:21 -- nvmf/common.sh@123 -- # set -e 00:23:51.718 16:14:21 -- nvmf/common.sh@124 -- # return 0 00:23:51.718 16:14:21 -- nvmf/common.sh@477 -- # '[' -n 1417127 ']' 00:23:51.718 16:14:21 -- nvmf/common.sh@478 -- # killprocess 1417127 00:23:51.718 16:14:21 -- common/autotest_common.sh@936 -- # '[' -z 1417127 ']' 00:23:51.718 16:14:21 -- common/autotest_common.sh@940 -- # kill -0 1417127 00:23:51.718 16:14:21 -- common/autotest_common.sh@941 -- # uname 00:23:51.718 16:14:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:51.718 16:14:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1417127 00:23:51.718 16:14:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:51.718 16:14:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:51.718 16:14:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1417127' 00:23:51.718 killing process with pid 1417127 00:23:51.718 16:14:21 -- common/autotest_common.sh@955 -- # kill 1417127 00:23:51.718 16:14:21 -- common/autotest_common.sh@960 -- # wait 1417127 00:23:51.718 16:14:21 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:23:51.718 16:14:21 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:23:51.718 00:23:51.718 real 1m12.729s 00:23:51.718 user 4m34.605s 00:23:51.718 sys 0m7.764s 00:23:51.718 16:14:21 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:23:51.719 16:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:51.719 ************************************ 00:23:51.719 END TEST nvmf_initiator_timeout 00:23:51.719 ************************************ 00:23:51.719 16:14:21 -- nvmf/nvmf.sh@69 -- # [[ phy == phy ]] 00:23:51.719 16:14:21 -- nvmf/nvmf.sh@70 -- # '[' rdma = tcp ']' 00:23:51.719 16:14:21 -- nvmf/nvmf.sh@76 -- # [[ '' -eq 1 ]] 00:23:51.719 16:14:21 -- nvmf/nvmf.sh@81 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:51.719 16:14:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:51.719 16:14:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:51.719 16:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:51.719 ************************************ 00:23:51.719 START TEST nvmf_shutdown 00:23:51.719 ************************************ 00:23:51.719 16:14:21 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=rdma 00:23:51.719 * Looking for test storage... 
00:23:51.719 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target 00:23:51.719 16:14:21 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:23:51.719 16:14:21 -- common/autotest_common.sh@1690 -- # lcov --version 00:23:51.719 16:14:21 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:23:51.719 16:14:21 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:23:51.719 16:14:21 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:23:51.719 16:14:21 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:23:51.719 16:14:21 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:23:51.719 16:14:21 -- scripts/common.sh@335 -- # IFS=.-: 00:23:51.719 16:14:21 -- scripts/common.sh@335 -- # read -ra ver1 00:23:51.719 16:14:21 -- scripts/common.sh@336 -- # IFS=.-: 00:23:51.719 16:14:21 -- scripts/common.sh@336 -- # read -ra ver2 00:23:51.719 16:14:21 -- scripts/common.sh@337 -- # local 'op=<' 00:23:51.719 16:14:21 -- scripts/common.sh@339 -- # ver1_l=2 00:23:51.719 16:14:21 -- scripts/common.sh@340 -- # ver2_l=1 00:23:51.719 16:14:21 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:23:51.719 16:14:21 -- scripts/common.sh@343 -- # case "$op" in 00:23:51.719 16:14:21 -- scripts/common.sh@344 -- # : 1 00:23:51.719 16:14:21 -- scripts/common.sh@363 -- # (( v = 0 )) 00:23:51.719 16:14:21 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:51.719 16:14:21 -- scripts/common.sh@364 -- # decimal 1 00:23:51.719 16:14:21 -- scripts/common.sh@352 -- # local d=1 00:23:51.719 16:14:21 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:51.719 16:14:21 -- scripts/common.sh@354 -- # echo 1 00:23:51.719 16:14:21 -- scripts/common.sh@364 -- # ver1[v]=1 00:23:51.719 16:14:21 -- scripts/common.sh@365 -- # decimal 2 00:23:51.719 16:14:21 -- scripts/common.sh@352 -- # local d=2 00:23:51.719 16:14:21 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:51.719 16:14:21 -- scripts/common.sh@354 -- # echo 2 00:23:51.719 16:14:21 -- scripts/common.sh@365 -- # ver2[v]=2 00:23:51.719 16:14:21 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:23:51.719 16:14:21 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:23:51.719 16:14:21 -- scripts/common.sh@367 -- # return 0 00:23:51.719 16:14:21 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:51.719 16:14:21 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:23:51.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.719 --rc genhtml_branch_coverage=1 00:23:51.719 --rc genhtml_function_coverage=1 00:23:51.719 --rc genhtml_legend=1 00:23:51.719 --rc geninfo_all_blocks=1 00:23:51.719 --rc geninfo_unexecuted_blocks=1 00:23:51.719 00:23:51.719 ' 00:23:51.719 16:14:21 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:23:51.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.719 --rc genhtml_branch_coverage=1 00:23:51.719 --rc genhtml_function_coverage=1 00:23:51.719 --rc genhtml_legend=1 00:23:51.719 --rc geninfo_all_blocks=1 00:23:51.719 --rc geninfo_unexecuted_blocks=1 00:23:51.719 00:23:51.719 ' 00:23:51.719 16:14:21 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:23:51.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.719 --rc genhtml_branch_coverage=1 00:23:51.719 --rc genhtml_function_coverage=1 00:23:51.719 --rc genhtml_legend=1 00:23:51.719 --rc geninfo_all_blocks=1 00:23:51.719 --rc geninfo_unexecuted_blocks=1 00:23:51.719 00:23:51.719 ' 
00:23:51.719 16:14:21 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:23:51.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:51.719 --rc genhtml_branch_coverage=1 00:23:51.719 --rc genhtml_function_coverage=1 00:23:51.719 --rc genhtml_legend=1 00:23:51.719 --rc geninfo_all_blocks=1 00:23:51.719 --rc geninfo_unexecuted_blocks=1 00:23:51.719 00:23:51.719 ' 00:23:51.719 16:14:21 -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:23:51.719 16:14:21 -- nvmf/common.sh@7 -- # uname -s 00:23:51.719 16:14:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:51.719 16:14:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:51.719 16:14:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:51.719 16:14:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:51.719 16:14:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:51.719 16:14:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:51.719 16:14:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:51.719 16:14:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:51.719 16:14:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:51.719 16:14:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:51.719 16:14:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:23:51.719 16:14:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:23:51.719 16:14:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:51.719 16:14:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:51.719 16:14:21 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:51.719 16:14:21 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:23:51.719 16:14:21 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:51.719 16:14:21 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:51.719 16:14:21 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:51.719 16:14:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.719 16:14:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.719 16:14:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.719 16:14:21 -- paths/export.sh@5 -- # export PATH 00:23:51.719 16:14:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:51.719 16:14:21 -- nvmf/common.sh@46 -- # : 0 00:23:51.719 16:14:21 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:23:51.719 16:14:21 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:23:51.719 16:14:21 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:23:51.719 16:14:21 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:51.719 16:14:21 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:51.719 16:14:21 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:23:51.719 16:14:21 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:23:51.719 16:14:21 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:23:51.719 16:14:21 -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:51.719 16:14:21 -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:51.719 16:14:21 -- target/shutdown.sh@146 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:51.719 16:14:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:23:51.719 16:14:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:51.719 16:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:51.719 ************************************ 00:23:51.719 START TEST nvmf_shutdown_tc1 00:23:51.719 ************************************ 00:23:51.719 16:14:21 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc1 00:23:51.719 16:14:21 -- target/shutdown.sh@74 -- # starttarget 00:23:51.719 16:14:21 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:51.719 16:14:21 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:23:51.719 16:14:21 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:51.719 16:14:21 -- nvmf/common.sh@436 -- # prepare_net_devs 00:23:51.719 16:14:21 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:23:51.719 16:14:21 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:23:51.719 16:14:21 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:51.719 16:14:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:51.719 16:14:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:51.719 16:14:21 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:23:51.719 16:14:21 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:23:51.719 16:14:21 -- nvmf/common.sh@284 -- # xtrace_disable 00:23:51.719 16:14:21 -- common/autotest_common.sh@10 -- # set +x 00:23:58.291 16:14:28 -- nvmf/common.sh@288 -- # 
local intel=0x8086 mellanox=0x15b3 pci 00:23:58.291 16:14:28 -- nvmf/common.sh@290 -- # pci_devs=() 00:23:58.291 16:14:28 -- nvmf/common.sh@290 -- # local -a pci_devs 00:23:58.291 16:14:28 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:23:58.291 16:14:28 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:23:58.291 16:14:28 -- nvmf/common.sh@292 -- # pci_drivers=() 00:23:58.291 16:14:28 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:23:58.291 16:14:28 -- nvmf/common.sh@294 -- # net_devs=() 00:23:58.291 16:14:28 -- nvmf/common.sh@294 -- # local -ga net_devs 00:23:58.291 16:14:28 -- nvmf/common.sh@295 -- # e810=() 00:23:58.291 16:14:28 -- nvmf/common.sh@295 -- # local -ga e810 00:23:58.291 16:14:28 -- nvmf/common.sh@296 -- # x722=() 00:23:58.291 16:14:28 -- nvmf/common.sh@296 -- # local -ga x722 00:23:58.291 16:14:28 -- nvmf/common.sh@297 -- # mlx=() 00:23:58.291 16:14:28 -- nvmf/common.sh@297 -- # local -ga mlx 00:23:58.291 16:14:28 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:58.291 16:14:28 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:58.291 16:14:28 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:58.291 16:14:28 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:58.291 16:14:28 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:58.291 16:14:28 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:58.291 16:14:28 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:58.291 16:14:28 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:58.291 16:14:28 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:58.291 16:14:28 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:58.291 16:14:28 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:58.291 16:14:28 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:23:58.291 16:14:28 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:23:58.291 16:14:28 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:23:58.291 16:14:28 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:23:58.291 16:14:28 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:23:58.291 16:14:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:58.291 16:14:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:23:58.291 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:23:58.291 16:14:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:58.291 16:14:28 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:23:58.291 16:14:28 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:23:58.291 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:23:58.291 16:14:28 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@349 -- # [[ 0x1015 == 
\0\x\1\0\1\7 ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:23:58.291 16:14:28 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:23:58.291 16:14:28 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:58.291 16:14:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.291 16:14:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:58.291 16:14:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.291 16:14:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:23:58.291 Found net devices under 0000:d9:00.0: mlx_0_0 00:23:58.291 16:14:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.291 16:14:28 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:23:58.291 16:14:28 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:58.291 16:14:28 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:23:58.291 16:14:28 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:58.291 16:14:28 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:23:58.291 Found net devices under 0000:d9:00.1: mlx_0_1 00:23:58.291 16:14:28 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:23:58.291 16:14:28 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:23:58.291 16:14:28 -- nvmf/common.sh@402 -- # is_hw=yes 00:23:58.291 16:14:28 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@408 -- # rdma_device_init 00:23:58.291 16:14:28 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:23:58.291 16:14:28 -- nvmf/common.sh@57 -- # uname 00:23:58.291 16:14:28 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:23:58.291 16:14:28 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:23:58.291 16:14:28 -- nvmf/common.sh@62 -- # modprobe ib_core 00:23:58.291 16:14:28 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:23:58.291 16:14:28 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:23:58.291 16:14:28 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:23:58.291 16:14:28 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:23:58.291 16:14:28 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:23:58.291 16:14:28 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:23:58.291 16:14:28 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:23:58.291 16:14:28 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:23:58.291 16:14:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:58.291 16:14:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:58.291 16:14:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:58.291 16:14:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:58.291 16:14:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:58.291 16:14:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:58.291 16:14:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.291 16:14:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:23:58.291 16:14:28 -- nvmf/common.sh@104 -- # continue 2 
00:23:58.291 16:14:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:58.291 16:14:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.291 16:14:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.291 16:14:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:58.291 16:14:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:58.292 16:14:28 -- nvmf/common.sh@104 -- # continue 2 00:23:58.292 16:14:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:58.292 16:14:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:23:58.292 16:14:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:58.292 16:14:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:58.292 16:14:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:58.292 16:14:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:58.292 16:14:28 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:23:58.292 16:14:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:23:58.292 16:14:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:23:58.292 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:58.292 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:23:58.292 altname enp217s0f0np0 00:23:58.292 altname ens818f0np0 00:23:58.292 inet 192.168.100.8/24 scope global mlx_0_0 00:23:58.292 valid_lft forever preferred_lft forever 00:23:58.292 16:14:28 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:23:58.292 16:14:28 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:23:58.292 16:14:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:58.292 16:14:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:58.292 16:14:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:58.292 16:14:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:58.292 16:14:28 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:23:58.292 16:14:28 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:23:58.292 16:14:28 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:23:58.292 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:23:58.292 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:23:58.292 altname enp217s0f1np1 00:23:58.292 altname ens818f1np1 00:23:58.292 inet 192.168.100.9/24 scope global mlx_0_1 00:23:58.292 valid_lft forever preferred_lft forever 00:23:58.292 16:14:28 -- nvmf/common.sh@410 -- # return 0 00:23:58.292 16:14:28 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:23:58.292 16:14:28 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:23:58.292 16:14:28 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:23:58.292 16:14:28 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:23:58.292 16:14:28 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:23:58.292 16:14:28 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:23:58.292 16:14:28 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:23:58.292 16:14:28 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:23:58.292 16:14:28 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:23:58.292 16:14:28 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:23:58.292 16:14:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:58.292 16:14:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.292 16:14:28 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:23:58.292 16:14:28 -- 
nvmf/common.sh@103 -- # echo mlx_0_0 00:23:58.292 16:14:28 -- nvmf/common.sh@104 -- # continue 2 00:23:58.292 16:14:28 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:23:58.292 16:14:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.292 16:14:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:23:58.292 16:14:28 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:23:58.292 16:14:28 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:23:58.292 16:14:28 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:23:58.292 16:14:28 -- nvmf/common.sh@104 -- # continue 2 00:23:58.292 16:14:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:58.292 16:14:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:23:58.292 16:14:28 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:23:58.292 16:14:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:58.292 16:14:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:23:58.292 16:14:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:58.292 16:14:28 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:23:58.292 16:14:28 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:23:58.292 16:14:28 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:23:58.292 16:14:28 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:23:58.292 16:14:28 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:23:58.292 16:14:28 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:23:58.292 16:14:28 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:23:58.292 192.168.100.9' 00:23:58.292 16:14:28 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:23:58.292 192.168.100.9' 00:23:58.292 16:14:28 -- nvmf/common.sh@445 -- # head -n 1 00:23:58.292 16:14:28 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:23:58.292 16:14:28 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:23:58.292 192.168.100.9' 00:23:58.292 16:14:28 -- nvmf/common.sh@446 -- # tail -n +2 00:23:58.292 16:14:28 -- nvmf/common.sh@446 -- # head -n 1 00:23:58.292 16:14:28 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:23:58.292 16:14:28 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:23:58.292 16:14:28 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:23:58.292 16:14:28 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:23:58.292 16:14:28 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:23:58.292 16:14:28 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:23:58.292 16:14:28 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:58.292 16:14:28 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:23:58.292 16:14:28 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:58.292 16:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:58.292 16:14:28 -- nvmf/common.sh@469 -- # nvmfpid=1431636 00:23:58.292 16:14:28 -- nvmf/common.sh@470 -- # waitforlisten 1431636 00:23:58.292 16:14:28 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:58.292 16:14:28 -- common/autotest_common.sh@829 -- # '[' -z 1431636 ']' 00:23:58.292 16:14:28 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:58.292 16:14:28 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:58.292 16:14:28 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:58.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:58.292 16:14:28 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:58.292 16:14:28 -- common/autotest_common.sh@10 -- # set +x 00:23:58.292 [2024-11-20 16:14:28.754371] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:58.292 [2024-11-20 16:14:28.754424] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:58.292 EAL: No free 2048 kB hugepages reported on node 1 00:23:58.292 [2024-11-20 16:14:28.825107] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:58.292 [2024-11-20 16:14:28.861768] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:23:58.292 [2024-11-20 16:14:28.861884] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.292 [2024-11-20 16:14:28.861894] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.292 [2024-11-20 16:14:28.861903] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:58.292 [2024-11-20 16:14:28.862010] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.292 [2024-11-20 16:14:28.862092] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:58.292 [2024-11-20 16:14:28.862185] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.292 [2024-11-20 16:14:28.862186] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:23:58.860 16:14:29 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:58.860 16:14:29 -- common/autotest_common.sh@862 -- # return 0 00:23:58.860 16:14:29 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:23:58.860 16:14:29 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:58.860 16:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:58.860 16:14:29 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.860 16:14:29 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:23:58.860 16:14:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:58.860 16:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:58.860 [2024-11-20 16:14:29.648362] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x15ac3c0/0x15b0890) succeed. 00:23:58.860 [2024-11-20 16:14:29.657624] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x15ad960/0x15f1f30) succeed. 
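nvmfappstart, traced a few lines up, launches a second nvmf_tgt instance for the shutdown test on cores 1-4 (-m 0x1E), and waitforlisten blocks until the application is accepting RPCs on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern, written as a hypothetical stand-alone snippet rather than the harness implementation, and assuming the default RPC socket path:

    # Start the target with all tracepoint groups enabled, then wait for its RPC socket
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    until [ -S /var/tmp/spdk.sock ]; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died during startup
        sleep 0.1
    done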
00:23:59.119 16:14:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.119 16:14:29 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:59.119 16:14:29 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:59.119 16:14:29 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:59.119 16:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:59.119 16:14:29 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:59.119 16:14:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.119 16:14:29 -- target/shutdown.sh@28 -- # cat 00:23:59.119 16:14:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.119 16:14:29 -- target/shutdown.sh@28 -- # cat 00:23:59.119 16:14:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.119 16:14:29 -- target/shutdown.sh@28 -- # cat 00:23:59.119 16:14:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.119 16:14:29 -- target/shutdown.sh@28 -- # cat 00:23:59.119 16:14:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.119 16:14:29 -- target/shutdown.sh@28 -- # cat 00:23:59.119 16:14:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.119 16:14:29 -- target/shutdown.sh@28 -- # cat 00:23:59.119 16:14:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.119 16:14:29 -- target/shutdown.sh@28 -- # cat 00:23:59.119 16:14:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.119 16:14:29 -- target/shutdown.sh@28 -- # cat 00:23:59.119 16:14:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.119 16:14:29 -- target/shutdown.sh@28 -- # cat 00:23:59.119 16:14:29 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:59.119 16:14:29 -- target/shutdown.sh@28 -- # cat 00:23:59.119 16:14:29 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:59.119 16:14:29 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:59.119 16:14:29 -- common/autotest_common.sh@10 -- # set +x 00:23:59.119 Malloc1 00:23:59.119 [2024-11-20 16:14:29.883846] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:23:59.119 Malloc2 00:23:59.378 Malloc3 00:23:59.378 Malloc4 00:23:59.378 Malloc5 00:23:59.378 Malloc6 00:23:59.378 Malloc7 00:23:59.378 Malloc8 00:23:59.638 Malloc9 00:23:59.638 Malloc10 00:23:59.638 16:14:30 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:59.638 16:14:30 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:59.638 16:14:30 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:59.638 16:14:30 -- common/autotest_common.sh@10 -- # set +x 00:23:59.638 16:14:30 -- target/shutdown.sh@78 -- # perfpid=1431961 00:23:59.638 16:14:30 -- target/shutdown.sh@79 -- # waitforlisten 1431961 /var/tmp/bdevperf.sock 00:23:59.638 16:14:30 -- common/autotest_common.sh@829 -- # '[' -z 1431961 ']' 00:23:59.638 16:14:30 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.638 16:14:30 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:59.638 16:14:30 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:59.638 16:14:30 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:59.638 16:14:30 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up 
and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:59.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:59.638 16:14:30 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:59.638 16:14:30 -- nvmf/common.sh@520 -- # config=() 00:23:59.638 16:14:30 -- common/autotest_common.sh@10 -- # set +x 00:23:59.638 16:14:30 -- nvmf/common.sh@520 -- # local subsystem config 00:23:59.638 16:14:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.638 16:14:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.638 { 00:23:59.638 "params": { 00:23:59.638 "name": "Nvme$subsystem", 00:23:59.638 "trtype": "$TEST_TRANSPORT", 00:23:59.638 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.638 "adrfam": "ipv4", 00:23:59.638 "trsvcid": "$NVMF_PORT", 00:23:59.638 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.638 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.638 "hdgst": ${hdgst:-false}, 00:23:59.638 "ddgst": ${ddgst:-false} 00:23:59.638 }, 00:23:59.638 "method": "bdev_nvme_attach_controller" 00:23:59.638 } 00:23:59.638 EOF 00:23:59.638 )") 00:23:59.638 16:14:30 -- nvmf/common.sh@542 -- # cat 00:23:59.639 16:14:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.639 { 00:23:59.639 "params": { 00:23:59.639 "name": "Nvme$subsystem", 00:23:59.639 "trtype": "$TEST_TRANSPORT", 00:23:59.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.639 "adrfam": "ipv4", 00:23:59.639 "trsvcid": "$NVMF_PORT", 00:23:59.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.639 "hdgst": ${hdgst:-false}, 00:23:59.639 "ddgst": ${ddgst:-false} 00:23:59.639 }, 00:23:59.639 "method": "bdev_nvme_attach_controller" 00:23:59.639 } 00:23:59.639 EOF 00:23:59.639 )") 00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # cat 00:23:59.639 16:14:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.639 { 00:23:59.639 "params": { 00:23:59.639 "name": "Nvme$subsystem", 00:23:59.639 "trtype": "$TEST_TRANSPORT", 00:23:59.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.639 "adrfam": "ipv4", 00:23:59.639 "trsvcid": "$NVMF_PORT", 00:23:59.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.639 "hdgst": ${hdgst:-false}, 00:23:59.639 "ddgst": ${ddgst:-false} 00:23:59.639 }, 00:23:59.639 "method": "bdev_nvme_attach_controller" 00:23:59.639 } 00:23:59.639 EOF 00:23:59.639 )") 00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # cat 00:23:59.639 16:14:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.639 { 00:23:59.639 "params": { 00:23:59.639 "name": "Nvme$subsystem", 00:23:59.639 "trtype": "$TEST_TRANSPORT", 00:23:59.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.639 "adrfam": "ipv4", 00:23:59.639 "trsvcid": "$NVMF_PORT", 00:23:59.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.639 "hdgst": ${hdgst:-false}, 00:23:59.639 "ddgst": ${ddgst:-false} 00:23:59.639 }, 00:23:59.639 "method": "bdev_nvme_attach_controller" 00:23:59.639 } 00:23:59.639 EOF 00:23:59.639 )") 00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # cat 00:23:59.639 16:14:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 
00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.639 { 00:23:59.639 "params": { 00:23:59.639 "name": "Nvme$subsystem", 00:23:59.639 "trtype": "$TEST_TRANSPORT", 00:23:59.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.639 "adrfam": "ipv4", 00:23:59.639 "trsvcid": "$NVMF_PORT", 00:23:59.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.639 "hdgst": ${hdgst:-false}, 00:23:59.639 "ddgst": ${ddgst:-false} 00:23:59.639 }, 00:23:59.639 "method": "bdev_nvme_attach_controller" 00:23:59.639 } 00:23:59.639 EOF 00:23:59.639 )") 00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # cat 00:23:59.639 [2024-11-20 16:14:30.370085] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:23:59.639 [2024-11-20 16:14:30.370139] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:59.639 16:14:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.639 { 00:23:59.639 "params": { 00:23:59.639 "name": "Nvme$subsystem", 00:23:59.639 "trtype": "$TEST_TRANSPORT", 00:23:59.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.639 "adrfam": "ipv4", 00:23:59.639 "trsvcid": "$NVMF_PORT", 00:23:59.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.639 "hdgst": ${hdgst:-false}, 00:23:59.639 "ddgst": ${ddgst:-false} 00:23:59.639 }, 00:23:59.639 "method": "bdev_nvme_attach_controller" 00:23:59.639 } 00:23:59.639 EOF 00:23:59.639 )") 00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # cat 00:23:59.639 16:14:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.639 { 00:23:59.639 "params": { 00:23:59.639 "name": "Nvme$subsystem", 00:23:59.639 "trtype": "$TEST_TRANSPORT", 00:23:59.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.639 "adrfam": "ipv4", 00:23:59.639 "trsvcid": "$NVMF_PORT", 00:23:59.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.639 "hdgst": ${hdgst:-false}, 00:23:59.639 "ddgst": ${ddgst:-false} 00:23:59.639 }, 00:23:59.639 "method": "bdev_nvme_attach_controller" 00:23:59.639 } 00:23:59.639 EOF 00:23:59.639 )") 00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # cat 00:23:59.639 16:14:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.639 { 00:23:59.639 "params": { 00:23:59.639 "name": "Nvme$subsystem", 00:23:59.639 "trtype": "$TEST_TRANSPORT", 00:23:59.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.639 "adrfam": "ipv4", 00:23:59.639 "trsvcid": "$NVMF_PORT", 00:23:59.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.639 "hdgst": ${hdgst:-false}, 00:23:59.639 "ddgst": ${ddgst:-false} 00:23:59.639 }, 00:23:59.639 "method": "bdev_nvme_attach_controller" 00:23:59.639 } 00:23:59.639 EOF 00:23:59.639 )") 00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # cat 00:23:59.639 16:14:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.639 { 00:23:59.639 "params": { 00:23:59.639 "name": 
"Nvme$subsystem", 00:23:59.639 "trtype": "$TEST_TRANSPORT", 00:23:59.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.639 "adrfam": "ipv4", 00:23:59.639 "trsvcid": "$NVMF_PORT", 00:23:59.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.639 "hdgst": ${hdgst:-false}, 00:23:59.639 "ddgst": ${ddgst:-false} 00:23:59.639 }, 00:23:59.639 "method": "bdev_nvme_attach_controller" 00:23:59.639 } 00:23:59.639 EOF 00:23:59.639 )") 00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # cat 00:23:59.639 16:14:30 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:23:59.639 { 00:23:59.639 "params": { 00:23:59.639 "name": "Nvme$subsystem", 00:23:59.639 "trtype": "$TEST_TRANSPORT", 00:23:59.639 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.639 "adrfam": "ipv4", 00:23:59.639 "trsvcid": "$NVMF_PORT", 00:23:59.639 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.639 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.639 "hdgst": ${hdgst:-false}, 00:23:59.639 "ddgst": ${ddgst:-false} 00:23:59.639 }, 00:23:59.639 "method": "bdev_nvme_attach_controller" 00:23:59.639 } 00:23:59.639 EOF 00:23:59.639 )") 00:23:59.639 16:14:30 -- nvmf/common.sh@542 -- # cat 00:23:59.639 EAL: No free 2048 kB hugepages reported on node 1 00:23:59.639 16:14:30 -- nvmf/common.sh@544 -- # jq . 00:23:59.639 16:14:30 -- nvmf/common.sh@545 -- # IFS=, 00:23:59.639 16:14:30 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:23:59.639 "params": { 00:23:59.639 "name": "Nvme1", 00:23:59.639 "trtype": "rdma", 00:23:59.639 "traddr": "192.168.100.8", 00:23:59.639 "adrfam": "ipv4", 00:23:59.639 "trsvcid": "4420", 00:23:59.639 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.639 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.639 "hdgst": false, 00:23:59.639 "ddgst": false 00:23:59.639 }, 00:23:59.639 "method": "bdev_nvme_attach_controller" 00:23:59.639 },{ 00:23:59.639 "params": { 00:23:59.639 "name": "Nvme2", 00:23:59.639 "trtype": "rdma", 00:23:59.639 "traddr": "192.168.100.8", 00:23:59.639 "adrfam": "ipv4", 00:23:59.639 "trsvcid": "4420", 00:23:59.639 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:59.639 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:59.639 "hdgst": false, 00:23:59.639 "ddgst": false 00:23:59.639 }, 00:23:59.639 "method": "bdev_nvme_attach_controller" 00:23:59.639 },{ 00:23:59.639 "params": { 00:23:59.639 "name": "Nvme3", 00:23:59.639 "trtype": "rdma", 00:23:59.639 "traddr": "192.168.100.8", 00:23:59.639 "adrfam": "ipv4", 00:23:59.639 "trsvcid": "4420", 00:23:59.639 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:59.639 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:59.639 "hdgst": false, 00:23:59.639 "ddgst": false 00:23:59.639 }, 00:23:59.639 "method": "bdev_nvme_attach_controller" 00:23:59.639 },{ 00:23:59.639 "params": { 00:23:59.639 "name": "Nvme4", 00:23:59.639 "trtype": "rdma", 00:23:59.639 "traddr": "192.168.100.8", 00:23:59.639 "adrfam": "ipv4", 00:23:59.639 "trsvcid": "4420", 00:23:59.639 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:59.639 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:59.639 "hdgst": false, 00:23:59.639 "ddgst": false 00:23:59.639 }, 00:23:59.639 "method": "bdev_nvme_attach_controller" 00:23:59.639 },{ 00:23:59.639 "params": { 00:23:59.639 "name": "Nvme5", 00:23:59.639 "trtype": "rdma", 00:23:59.639 "traddr": "192.168.100.8", 00:23:59.639 "adrfam": "ipv4", 00:23:59.639 "trsvcid": "4420", 00:23:59.640 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:59.640 
"hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:59.640 "hdgst": false, 00:23:59.640 "ddgst": false 00:23:59.640 }, 00:23:59.640 "method": "bdev_nvme_attach_controller" 00:23:59.640 },{ 00:23:59.640 "params": { 00:23:59.640 "name": "Nvme6", 00:23:59.640 "trtype": "rdma", 00:23:59.640 "traddr": "192.168.100.8", 00:23:59.640 "adrfam": "ipv4", 00:23:59.640 "trsvcid": "4420", 00:23:59.640 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:59.640 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:59.640 "hdgst": false, 00:23:59.640 "ddgst": false 00:23:59.640 }, 00:23:59.640 "method": "bdev_nvme_attach_controller" 00:23:59.640 },{ 00:23:59.640 "params": { 00:23:59.640 "name": "Nvme7", 00:23:59.640 "trtype": "rdma", 00:23:59.640 "traddr": "192.168.100.8", 00:23:59.640 "adrfam": "ipv4", 00:23:59.640 "trsvcid": "4420", 00:23:59.640 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:59.640 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:59.640 "hdgst": false, 00:23:59.640 "ddgst": false 00:23:59.640 }, 00:23:59.640 "method": "bdev_nvme_attach_controller" 00:23:59.640 },{ 00:23:59.640 "params": { 00:23:59.640 "name": "Nvme8", 00:23:59.640 "trtype": "rdma", 00:23:59.640 "traddr": "192.168.100.8", 00:23:59.640 "adrfam": "ipv4", 00:23:59.640 "trsvcid": "4420", 00:23:59.640 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:59.640 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:59.640 "hdgst": false, 00:23:59.640 "ddgst": false 00:23:59.640 }, 00:23:59.640 "method": "bdev_nvme_attach_controller" 00:23:59.640 },{ 00:23:59.640 "params": { 00:23:59.640 "name": "Nvme9", 00:23:59.640 "trtype": "rdma", 00:23:59.640 "traddr": "192.168.100.8", 00:23:59.640 "adrfam": "ipv4", 00:23:59.640 "trsvcid": "4420", 00:23:59.640 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:59.640 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:59.640 "hdgst": false, 00:23:59.640 "ddgst": false 00:23:59.640 }, 00:23:59.640 "method": "bdev_nvme_attach_controller" 00:23:59.640 },{ 00:23:59.640 "params": { 00:23:59.640 "name": "Nvme10", 00:23:59.640 "trtype": "rdma", 00:23:59.640 "traddr": "192.168.100.8", 00:23:59.640 "adrfam": "ipv4", 00:23:59.640 "trsvcid": "4420", 00:23:59.640 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:59.640 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:59.640 "hdgst": false, 00:23:59.640 "ddgst": false 00:23:59.640 }, 00:23:59.640 "method": "bdev_nvme_attach_controller" 00:23:59.640 }' 00:23:59.899 [2024-11-20 16:14:30.446530] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.899 [2024-11-20 16:14:30.483916] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.279 16:14:31 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:01.279 16:14:31 -- common/autotest_common.sh@862 -- # return 0 00:24:01.279 16:14:31 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:01.279 16:14:31 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:01.279 16:14:31 -- common/autotest_common.sh@10 -- # set +x 00:24:01.279 16:14:31 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:01.279 16:14:31 -- target/shutdown.sh@83 -- # kill -9 1431961 00:24:01.279 16:14:31 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:24:01.279 16:14:31 -- target/shutdown.sh@87 -- # sleep 1 00:24:02.218 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1431961 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:02.218 16:14:32 -- target/shutdown.sh@88 -- # kill -0 
1431636 00:24:02.218 16:14:32 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:02.218 16:14:32 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:02.218 16:14:32 -- nvmf/common.sh@520 -- # config=() 00:24:02.218 16:14:32 -- nvmf/common.sh@520 -- # local subsystem config 00:24:02.218 16:14:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:02.218 16:14:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:02.218 { 00:24:02.218 "params": { 00:24:02.218 "name": "Nvme$subsystem", 00:24:02.218 "trtype": "$TEST_TRANSPORT", 00:24:02.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.218 "adrfam": "ipv4", 00:24:02.218 "trsvcid": "$NVMF_PORT", 00:24:02.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.218 "hdgst": ${hdgst:-false}, 00:24:02.218 "ddgst": ${ddgst:-false} 00:24:02.218 }, 00:24:02.218 "method": "bdev_nvme_attach_controller" 00:24:02.218 } 00:24:02.218 EOF 00:24:02.218 )") 00:24:02.218 16:14:32 -- nvmf/common.sh@542 -- # cat 00:24:02.218 16:14:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:02.218 16:14:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:02.218 { 00:24:02.218 "params": { 00:24:02.218 "name": "Nvme$subsystem", 00:24:02.218 "trtype": "$TEST_TRANSPORT", 00:24:02.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.218 "adrfam": "ipv4", 00:24:02.218 "trsvcid": "$NVMF_PORT", 00:24:02.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.218 "hdgst": ${hdgst:-false}, 00:24:02.218 "ddgst": ${ddgst:-false} 00:24:02.218 }, 00:24:02.218 "method": "bdev_nvme_attach_controller" 00:24:02.218 } 00:24:02.218 EOF 00:24:02.218 )") 00:24:02.218 16:14:32 -- nvmf/common.sh@542 -- # cat 00:24:02.218 16:14:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:02.218 16:14:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:02.218 { 00:24:02.218 "params": { 00:24:02.218 "name": "Nvme$subsystem", 00:24:02.218 "trtype": "$TEST_TRANSPORT", 00:24:02.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.218 "adrfam": "ipv4", 00:24:02.218 "trsvcid": "$NVMF_PORT", 00:24:02.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.218 "hdgst": ${hdgst:-false}, 00:24:02.218 "ddgst": ${ddgst:-false} 00:24:02.218 }, 00:24:02.218 "method": "bdev_nvme_attach_controller" 00:24:02.218 } 00:24:02.218 EOF 00:24:02.218 )") 00:24:02.218 16:14:32 -- nvmf/common.sh@542 -- # cat 00:24:02.218 16:14:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:02.218 16:14:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:02.218 { 00:24:02.218 "params": { 00:24:02.218 "name": "Nvme$subsystem", 00:24:02.218 "trtype": "$TEST_TRANSPORT", 00:24:02.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.218 "adrfam": "ipv4", 00:24:02.218 "trsvcid": "$NVMF_PORT", 00:24:02.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.218 "hdgst": ${hdgst:-false}, 00:24:02.218 "ddgst": ${ddgst:-false} 00:24:02.218 }, 00:24:02.218 "method": "bdev_nvme_attach_controller" 00:24:02.218 } 00:24:02.218 EOF 00:24:02.218 )") 00:24:02.218 16:14:32 -- nvmf/common.sh@542 -- # cat 00:24:02.218 16:14:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:02.218 
16:14:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:02.218 { 00:24:02.218 "params": { 00:24:02.218 "name": "Nvme$subsystem", 00:24:02.218 "trtype": "$TEST_TRANSPORT", 00:24:02.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.218 "adrfam": "ipv4", 00:24:02.218 "trsvcid": "$NVMF_PORT", 00:24:02.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.218 "hdgst": ${hdgst:-false}, 00:24:02.218 "ddgst": ${ddgst:-false} 00:24:02.218 }, 00:24:02.218 "method": "bdev_nvme_attach_controller" 00:24:02.218 } 00:24:02.218 EOF 00:24:02.218 )") 00:24:02.218 16:14:32 -- nvmf/common.sh@542 -- # cat 00:24:02.218 16:14:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:02.218 16:14:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:02.218 { 00:24:02.218 "params": { 00:24:02.218 "name": "Nvme$subsystem", 00:24:02.218 "trtype": "$TEST_TRANSPORT", 00:24:02.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.218 "adrfam": "ipv4", 00:24:02.218 "trsvcid": "$NVMF_PORT", 00:24:02.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.218 "hdgst": ${hdgst:-false}, 00:24:02.218 "ddgst": ${ddgst:-false} 00:24:02.218 }, 00:24:02.218 "method": "bdev_nvme_attach_controller" 00:24:02.218 } 00:24:02.218 EOF 00:24:02.218 )") 00:24:02.218 [2024-11-20 16:14:32.928336] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:02.218 [2024-11-20 16:14:32.928390] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1432507 ] 00:24:02.218 16:14:32 -- nvmf/common.sh@542 -- # cat 00:24:02.218 16:14:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:02.218 16:14:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:02.218 { 00:24:02.218 "params": { 00:24:02.218 "name": "Nvme$subsystem", 00:24:02.218 "trtype": "$TEST_TRANSPORT", 00:24:02.218 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.218 "adrfam": "ipv4", 00:24:02.218 "trsvcid": "$NVMF_PORT", 00:24:02.218 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.218 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.218 "hdgst": ${hdgst:-false}, 00:24:02.218 "ddgst": ${ddgst:-false} 00:24:02.218 }, 00:24:02.218 "method": "bdev_nvme_attach_controller" 00:24:02.218 } 00:24:02.218 EOF 00:24:02.218 )") 00:24:02.219 16:14:32 -- nvmf/common.sh@542 -- # cat 00:24:02.219 16:14:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:02.219 16:14:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:02.219 { 00:24:02.219 "params": { 00:24:02.219 "name": "Nvme$subsystem", 00:24:02.219 "trtype": "$TEST_TRANSPORT", 00:24:02.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.219 "adrfam": "ipv4", 00:24:02.219 "trsvcid": "$NVMF_PORT", 00:24:02.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.219 "hdgst": ${hdgst:-false}, 00:24:02.219 "ddgst": ${ddgst:-false} 00:24:02.219 }, 00:24:02.219 "method": "bdev_nvme_attach_controller" 00:24:02.219 } 00:24:02.219 EOF 00:24:02.219 )") 00:24:02.219 16:14:32 -- nvmf/common.sh@542 -- # cat 00:24:02.219 16:14:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:02.219 16:14:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:02.219 { 00:24:02.219 "params": { 00:24:02.219 "name": 
"Nvme$subsystem", 00:24:02.219 "trtype": "$TEST_TRANSPORT", 00:24:02.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.219 "adrfam": "ipv4", 00:24:02.219 "trsvcid": "$NVMF_PORT", 00:24:02.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.219 "hdgst": ${hdgst:-false}, 00:24:02.219 "ddgst": ${ddgst:-false} 00:24:02.219 }, 00:24:02.219 "method": "bdev_nvme_attach_controller" 00:24:02.219 } 00:24:02.219 EOF 00:24:02.219 )") 00:24:02.219 16:14:32 -- nvmf/common.sh@542 -- # cat 00:24:02.219 16:14:32 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:02.219 16:14:32 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:02.219 { 00:24:02.219 "params": { 00:24:02.219 "name": "Nvme$subsystem", 00:24:02.219 "trtype": "$TEST_TRANSPORT", 00:24:02.219 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:02.219 "adrfam": "ipv4", 00:24:02.219 "trsvcid": "$NVMF_PORT", 00:24:02.219 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:02.219 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:02.219 "hdgst": ${hdgst:-false}, 00:24:02.219 "ddgst": ${ddgst:-false} 00:24:02.219 }, 00:24:02.219 "method": "bdev_nvme_attach_controller" 00:24:02.219 } 00:24:02.219 EOF 00:24:02.219 )") 00:24:02.219 16:14:32 -- nvmf/common.sh@542 -- # cat 00:24:02.219 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.219 16:14:32 -- nvmf/common.sh@544 -- # jq . 00:24:02.219 16:14:32 -- nvmf/common.sh@545 -- # IFS=, 00:24:02.219 16:14:32 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:02.219 "params": { 00:24:02.219 "name": "Nvme1", 00:24:02.219 "trtype": "rdma", 00:24:02.219 "traddr": "192.168.100.8", 00:24:02.219 "adrfam": "ipv4", 00:24:02.219 "trsvcid": "4420", 00:24:02.219 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:02.219 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:02.219 "hdgst": false, 00:24:02.219 "ddgst": false 00:24:02.219 }, 00:24:02.219 "method": "bdev_nvme_attach_controller" 00:24:02.219 },{ 00:24:02.219 "params": { 00:24:02.219 "name": "Nvme2", 00:24:02.219 "trtype": "rdma", 00:24:02.219 "traddr": "192.168.100.8", 00:24:02.219 "adrfam": "ipv4", 00:24:02.219 "trsvcid": "4420", 00:24:02.219 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:02.219 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:02.219 "hdgst": false, 00:24:02.219 "ddgst": false 00:24:02.219 }, 00:24:02.219 "method": "bdev_nvme_attach_controller" 00:24:02.219 },{ 00:24:02.219 "params": { 00:24:02.219 "name": "Nvme3", 00:24:02.219 "trtype": "rdma", 00:24:02.219 "traddr": "192.168.100.8", 00:24:02.219 "adrfam": "ipv4", 00:24:02.219 "trsvcid": "4420", 00:24:02.219 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:02.219 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:02.219 "hdgst": false, 00:24:02.219 "ddgst": false 00:24:02.219 }, 00:24:02.219 "method": "bdev_nvme_attach_controller" 00:24:02.219 },{ 00:24:02.219 "params": { 00:24:02.219 "name": "Nvme4", 00:24:02.219 "trtype": "rdma", 00:24:02.219 "traddr": "192.168.100.8", 00:24:02.219 "adrfam": "ipv4", 00:24:02.219 "trsvcid": "4420", 00:24:02.219 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:02.219 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:02.219 "hdgst": false, 00:24:02.219 "ddgst": false 00:24:02.219 }, 00:24:02.219 "method": "bdev_nvme_attach_controller" 00:24:02.219 },{ 00:24:02.219 "params": { 00:24:02.219 "name": "Nvme5", 00:24:02.219 "trtype": "rdma", 00:24:02.219 "traddr": "192.168.100.8", 00:24:02.219 "adrfam": "ipv4", 00:24:02.219 "trsvcid": "4420", 00:24:02.219 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:02.219 
"hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:02.219 "hdgst": false, 00:24:02.219 "ddgst": false 00:24:02.219 }, 00:24:02.219 "method": "bdev_nvme_attach_controller" 00:24:02.219 },{ 00:24:02.219 "params": { 00:24:02.219 "name": "Nvme6", 00:24:02.219 "trtype": "rdma", 00:24:02.219 "traddr": "192.168.100.8", 00:24:02.219 "adrfam": "ipv4", 00:24:02.219 "trsvcid": "4420", 00:24:02.219 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:02.219 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:02.219 "hdgst": false, 00:24:02.219 "ddgst": false 00:24:02.219 }, 00:24:02.219 "method": "bdev_nvme_attach_controller" 00:24:02.219 },{ 00:24:02.219 "params": { 00:24:02.219 "name": "Nvme7", 00:24:02.219 "trtype": "rdma", 00:24:02.219 "traddr": "192.168.100.8", 00:24:02.219 "adrfam": "ipv4", 00:24:02.219 "trsvcid": "4420", 00:24:02.219 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:02.219 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:02.219 "hdgst": false, 00:24:02.219 "ddgst": false 00:24:02.219 }, 00:24:02.219 "method": "bdev_nvme_attach_controller" 00:24:02.219 },{ 00:24:02.219 "params": { 00:24:02.219 "name": "Nvme8", 00:24:02.219 "trtype": "rdma", 00:24:02.219 "traddr": "192.168.100.8", 00:24:02.219 "adrfam": "ipv4", 00:24:02.219 "trsvcid": "4420", 00:24:02.219 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:02.219 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:02.219 "hdgst": false, 00:24:02.219 "ddgst": false 00:24:02.219 }, 00:24:02.219 "method": "bdev_nvme_attach_controller" 00:24:02.219 },{ 00:24:02.219 "params": { 00:24:02.219 "name": "Nvme9", 00:24:02.219 "trtype": "rdma", 00:24:02.219 "traddr": "192.168.100.8", 00:24:02.219 "adrfam": "ipv4", 00:24:02.219 "trsvcid": "4420", 00:24:02.219 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:02.219 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:02.219 "hdgst": false, 00:24:02.219 "ddgst": false 00:24:02.219 }, 00:24:02.219 "method": "bdev_nvme_attach_controller" 00:24:02.219 },{ 00:24:02.219 "params": { 00:24:02.219 "name": "Nvme10", 00:24:02.219 "trtype": "rdma", 00:24:02.219 "traddr": "192.168.100.8", 00:24:02.219 "adrfam": "ipv4", 00:24:02.219 "trsvcid": "4420", 00:24:02.219 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:02.219 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:02.219 "hdgst": false, 00:24:02.219 "ddgst": false 00:24:02.219 }, 00:24:02.219 "method": "bdev_nvme_attach_controller" 00:24:02.219 }' 00:24:02.219 [2024-11-20 16:14:33.002477] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:02.479 [2024-11-20 16:14:33.039420] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:03.417 Running I/O for 1 seconds... 
00:24:04.356 00:24:04.356 Latency(us) 00:24:04.356 [2024-11-20T15:14:35.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.356 [2024-11-20T15:14:35.161Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:04.356 Verification LBA range: start 0x0 length 0x400 00:24:04.356 Nvme1n1 : 1.10 712.28 44.52 0.00 0.00 88787.47 7392.46 118279.37 00:24:04.356 [2024-11-20T15:14:35.161Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:04.356 Verification LBA range: start 0x0 length 0x400 00:24:04.356 Nvme2n1 : 1.10 737.07 46.07 0.00 0.00 85206.12 7654.60 113246.21 00:24:04.356 [2024-11-20T15:14:35.161Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:04.356 Verification LBA range: start 0x0 length 0x400 00:24:04.356 Nvme3n1 : 1.10 750.04 46.88 0.00 0.00 83173.52 7864.32 75497.47 00:24:04.356 [2024-11-20T15:14:35.161Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:04.356 Verification LBA range: start 0x0 length 0x400 00:24:04.356 Nvme4n1 : 1.10 749.36 46.83 0.00 0.00 82781.10 8074.04 73819.75 00:24:04.356 [2024-11-20T15:14:35.161Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:04.356 Verification LBA range: start 0x0 length 0x400 00:24:04.356 Nvme5n1 : 1.10 748.68 46.79 0.00 0.00 82377.24 8283.75 72561.46 00:24:04.356 [2024-11-20T15:14:35.161Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:04.356 Verification LBA range: start 0x0 length 0x400 00:24:04.356 Nvme6n1 : 1.10 748.01 46.75 0.00 0.00 81975.29 8441.04 71303.17 00:24:04.356 [2024-11-20T15:14:35.161Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:04.356 Verification LBA range: start 0x0 length 0x400 00:24:04.356 Nvme7n1 : 1.10 747.33 46.71 0.00 0.00 81556.30 8650.75 70883.74 00:24:04.356 [2024-11-20T15:14:35.161Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:04.356 Verification LBA range: start 0x0 length 0x400 00:24:04.356 Nvme8n1 : 1.10 746.66 46.67 0.00 0.00 81134.33 8860.47 72561.46 00:24:04.356 [2024-11-20T15:14:35.161Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:04.356 Verification LBA range: start 0x0 length 0x400 00:24:04.356 Nvme9n1 : 1.11 745.99 46.62 0.00 0.00 80706.70 9017.75 74239.18 00:24:04.356 [2024-11-20T15:14:35.161Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:04.356 Verification LBA range: start 0x0 length 0x400 00:24:04.356 Nvme10n1 : 1.11 551.13 34.45 0.00 0.00 108410.98 7707.03 328833.43 00:24:04.356 [2024-11-20T15:14:35.161Z] =================================================================================================================== 00:24:04.356 [2024-11-20T15:14:35.161Z] Total : 7236.55 452.28 0.00 0.00 84980.03 7392.46 328833.43 00:24:04.616 16:14:35 -- target/shutdown.sh@93 -- # stoptarget 00:24:04.616 16:14:35 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:04.616 16:14:35 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:04.616 16:14:35 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:04.616 16:14:35 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:04.616 16:14:35 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:04.616 16:14:35 -- nvmf/common.sh@116 -- # sync 00:24:04.616 16:14:35 -- 
nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:04.616 16:14:35 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:04.616 16:14:35 -- nvmf/common.sh@119 -- # set +e 00:24:04.616 16:14:35 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:04.616 16:14:35 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:04.616 rmmod nvme_rdma 00:24:04.616 rmmod nvme_fabrics 00:24:04.616 16:14:35 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:04.616 16:14:35 -- nvmf/common.sh@123 -- # set -e 00:24:04.616 16:14:35 -- nvmf/common.sh@124 -- # return 0 00:24:04.616 16:14:35 -- nvmf/common.sh@477 -- # '[' -n 1431636 ']' 00:24:04.616 16:14:35 -- nvmf/common.sh@478 -- # killprocess 1431636 00:24:04.616 16:14:35 -- common/autotest_common.sh@936 -- # '[' -z 1431636 ']' 00:24:04.616 16:14:35 -- common/autotest_common.sh@940 -- # kill -0 1431636 00:24:04.616 16:14:35 -- common/autotest_common.sh@941 -- # uname 00:24:04.616 16:14:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:04.616 16:14:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1431636 00:24:04.876 16:14:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:04.876 16:14:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:04.876 16:14:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1431636' 00:24:04.876 killing process with pid 1431636 00:24:04.876 16:14:35 -- common/autotest_common.sh@955 -- # kill 1431636 00:24:04.876 16:14:35 -- common/autotest_common.sh@960 -- # wait 1431636 00:24:05.182 16:14:35 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:05.182 16:14:35 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:05.182 00:24:05.182 real 0m13.916s 00:24:05.182 user 0m33.172s 00:24:05.182 sys 0m6.410s 00:24:05.182 16:14:35 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:05.182 16:14:35 -- common/autotest_common.sh@10 -- # set +x 00:24:05.182 ************************************ 00:24:05.182 END TEST nvmf_shutdown_tc1 00:24:05.182 ************************************ 00:24:05.182 16:14:35 -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:05.182 16:14:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:05.182 16:14:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:05.182 16:14:35 -- common/autotest_common.sh@10 -- # set +x 00:24:05.182 ************************************ 00:24:05.182 START TEST nvmf_shutdown_tc2 00:24:05.182 ************************************ 00:24:05.182 16:14:35 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc2 00:24:05.182 16:14:35 -- target/shutdown.sh@98 -- # starttarget 00:24:05.182 16:14:35 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:05.182 16:14:35 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:05.182 16:14:35 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.182 16:14:35 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:05.182 16:14:35 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:05.182 16:14:35 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:05.182 16:14:35 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:05.182 16:14:35 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:05.182 16:14:35 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:05.480 16:14:35 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:05.480 16:14:35 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:05.480 16:14:35 -- nvmf/common.sh@284 -- # xtrace_disable 
00:24:05.480 16:14:35 -- common/autotest_common.sh@10 -- # set +x 00:24:05.480 16:14:35 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:05.480 16:14:35 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:05.480 16:14:35 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:05.480 16:14:35 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:05.480 16:14:35 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:05.480 16:14:35 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:05.480 16:14:35 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:05.480 16:14:35 -- nvmf/common.sh@294 -- # net_devs=() 00:24:05.480 16:14:35 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:05.480 16:14:35 -- nvmf/common.sh@295 -- # e810=() 00:24:05.480 16:14:35 -- nvmf/common.sh@295 -- # local -ga e810 00:24:05.480 16:14:35 -- nvmf/common.sh@296 -- # x722=() 00:24:05.480 16:14:35 -- nvmf/common.sh@296 -- # local -ga x722 00:24:05.480 16:14:35 -- nvmf/common.sh@297 -- # mlx=() 00:24:05.480 16:14:35 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:05.480 16:14:35 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.480 16:14:35 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.480 16:14:35 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.480 16:14:35 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.480 16:14:35 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.480 16:14:35 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.480 16:14:35 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.480 16:14:35 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.480 16:14:35 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.480 16:14:35 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.480 16:14:35 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.480 16:14:35 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:05.480 16:14:35 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:05.480 16:14:35 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:05.480 16:14:35 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:05.480 16:14:35 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:05.480 16:14:35 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:05.480 16:14:35 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:05.480 16:14:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:05.480 16:14:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:05.480 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:05.480 16:14:35 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:05.480 16:14:35 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:05.480 16:14:35 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:05.480 16:14:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:05.480 16:14:35 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:05.480 16:14:35 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:05.480 16:14:35 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:05.480 16:14:35 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:05.480 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:05.480 16:14:35 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:05.480 16:14:35 -- 
nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:05.480 16:14:35 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:05.480 16:14:35 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:05.480 16:14:35 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:05.480 16:14:35 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:05.480 16:14:35 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:05.480 16:14:35 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:05.480 16:14:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:05.480 16:14:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.480 16:14:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:05.480 16:14:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.480 16:14:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:05.480 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:05.480 16:14:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.480 16:14:35 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:05.480 16:14:35 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.480 16:14:35 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:05.480 16:14:35 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.480 16:14:35 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:05.480 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:05.480 16:14:35 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.480 16:14:35 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:05.480 16:14:35 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:05.480 16:14:35 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:05.480 16:14:35 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:05.480 16:14:35 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:05.480 16:14:35 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:05.480 16:14:35 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:05.480 16:14:35 -- nvmf/common.sh@57 -- # uname 00:24:05.480 16:14:35 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:05.480 16:14:35 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:05.480 16:14:35 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:05.481 16:14:35 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:05.481 16:14:35 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:05.481 16:14:35 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:05.481 16:14:36 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:05.481 16:14:36 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:05.481 16:14:36 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:05.481 16:14:36 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:05.481 16:14:36 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:05.481 16:14:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:05.481 16:14:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:05.481 16:14:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:05.481 16:14:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:05.481 16:14:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:05.481 16:14:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:05.481 16:14:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:05.481 16:14:36 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:05.481 
16:14:36 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:05.481 16:14:36 -- nvmf/common.sh@104 -- # continue 2 00:24:05.481 16:14:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:05.481 16:14:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:05.481 16:14:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:05.481 16:14:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:05.481 16:14:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:05.481 16:14:36 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:05.481 16:14:36 -- nvmf/common.sh@104 -- # continue 2 00:24:05.481 16:14:36 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:05.481 16:14:36 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:05.481 16:14:36 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:05.481 16:14:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:05.481 16:14:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:05.481 16:14:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:05.481 16:14:36 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:05.481 16:14:36 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:05.481 16:14:36 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:05.481 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:05.481 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:05.481 altname enp217s0f0np0 00:24:05.481 altname ens818f0np0 00:24:05.481 inet 192.168.100.8/24 scope global mlx_0_0 00:24:05.481 valid_lft forever preferred_lft forever 00:24:05.481 16:14:36 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:05.481 16:14:36 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:05.481 16:14:36 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:05.481 16:14:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:05.481 16:14:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:05.481 16:14:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:05.481 16:14:36 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:05.481 16:14:36 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:05.481 16:14:36 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:05.481 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:05.481 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:05.481 altname enp217s0f1np1 00:24:05.481 altname ens818f1np1 00:24:05.481 inet 192.168.100.9/24 scope global mlx_0_1 00:24:05.481 valid_lft forever preferred_lft forever 00:24:05.481 16:14:36 -- nvmf/common.sh@410 -- # return 0 00:24:05.481 16:14:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:05.481 16:14:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:05.481 16:14:36 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:05.481 16:14:36 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:05.481 16:14:36 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:05.481 16:14:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:05.481 16:14:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:05.481 16:14:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:05.481 16:14:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:05.481 16:14:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:05.481 16:14:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:05.481 16:14:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 
00:24:05.481 16:14:36 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:05.481 16:14:36 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:05.481 16:14:36 -- nvmf/common.sh@104 -- # continue 2 00:24:05.481 16:14:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:05.481 16:14:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:05.481 16:14:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:05.481 16:14:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:05.481 16:14:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:05.481 16:14:36 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:05.481 16:14:36 -- nvmf/common.sh@104 -- # continue 2 00:24:05.481 16:14:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:05.481 16:14:36 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:05.481 16:14:36 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:05.481 16:14:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:05.481 16:14:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:05.481 16:14:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:05.481 16:14:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:05.481 16:14:36 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:05.481 16:14:36 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:05.481 16:14:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:05.481 16:14:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:05.481 16:14:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:05.481 16:14:36 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:05.481 192.168.100.9' 00:24:05.481 16:14:36 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:05.481 192.168.100.9' 00:24:05.481 16:14:36 -- nvmf/common.sh@445 -- # head -n 1 00:24:05.481 16:14:36 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:05.481 16:14:36 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:05.481 192.168.100.9' 00:24:05.481 16:14:36 -- nvmf/common.sh@446 -- # tail -n +2 00:24:05.481 16:14:36 -- nvmf/common.sh@446 -- # head -n 1 00:24:05.481 16:14:36 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:05.481 16:14:36 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:05.481 16:14:36 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:05.481 16:14:36 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:05.481 16:14:36 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:05.481 16:14:36 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:05.481 16:14:36 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:05.481 16:14:36 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:05.481 16:14:36 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:05.481 16:14:36 -- common/autotest_common.sh@10 -- # set +x 00:24:05.481 16:14:36 -- nvmf/common.sh@469 -- # nvmfpid=1433152 00:24:05.481 16:14:36 -- nvmf/common.sh@470 -- # waitforlisten 1433152 00:24:05.481 16:14:36 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:05.481 16:14:36 -- common/autotest_common.sh@829 -- # '[' -z 1433152 ']' 00:24:05.481 16:14:36 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.481 16:14:36 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:05.481 16:14:36 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.481 16:14:36 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:05.481 16:14:36 -- common/autotest_common.sh@10 -- # set +x 00:24:05.481 [2024-11-20 16:14:36.245913] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:05.481 [2024-11-20 16:14:36.245963] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.481 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.741 [2024-11-20 16:14:36.316645] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:05.741 [2024-11-20 16:14:36.353806] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:05.741 [2024-11-20 16:14:36.353915] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.741 [2024-11-20 16:14:36.353925] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.741 [2024-11-20 16:14:36.353934] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:05.741 [2024-11-20 16:14:36.354039] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.741 [2024-11-20 16:14:36.354127] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:05.741 [2024-11-20 16:14:36.354234] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.741 [2024-11-20 16:14:36.354236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:06.309 16:14:37 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:06.309 16:14:37 -- common/autotest_common.sh@862 -- # return 0 00:24:06.309 16:14:37 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:06.310 16:14:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:06.310 16:14:37 -- common/autotest_common.sh@10 -- # set +x 00:24:06.569 16:14:37 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.569 16:14:37 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:06.569 16:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.569 16:14:37 -- common/autotest_common.sh@10 -- # set +x 00:24:06.569 [2024-11-20 16:14:37.148840] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9813c0/0x985890) succeed. 00:24:06.569 [2024-11-20 16:14:37.158008] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x982960/0x9c6f30) succeed. 
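(At this point the tc2 target is up: nvmf_tgt was started with -m 0x1E, nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 created the RDMA transport, and the cat/rpc_cmd traces that follow batch the per-subsystem setup into rpcs.txt before replaying it. An equivalent sequence issued directly with rpc.py, assuming illustrative malloc bdev sizes and serial numbers (the real values live in shutdown.sh), would be roughly:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
for i in {1..10}; do
    $rpc bdev_malloc_create 64 512 -b Malloc$i                           # backing bdev; 64 MiB x 512 B blocks is illustrative
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i  # -a: allow any host, -s: serial number (placeholder)
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t rdma -a 192.168.100.8 -s 4420
done

The listener address and port match the "NVMe/RDMA Target Listening on 192.168.100.8 port 4420" notice printed below once the subsystems are created.)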
00:24:06.569 16:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:06.569 16:14:37 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:06.569 16:14:37 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:06.569 16:14:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:06.569 16:14:37 -- common/autotest_common.sh@10 -- # set +x 00:24:06.569 16:14:37 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:06.569 16:14:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:06.569 16:14:37 -- target/shutdown.sh@28 -- # cat 00:24:06.569 16:14:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:06.569 16:14:37 -- target/shutdown.sh@28 -- # cat 00:24:06.569 16:14:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:06.569 16:14:37 -- target/shutdown.sh@28 -- # cat 00:24:06.569 16:14:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:06.569 16:14:37 -- target/shutdown.sh@28 -- # cat 00:24:06.569 16:14:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:06.569 16:14:37 -- target/shutdown.sh@28 -- # cat 00:24:06.569 16:14:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:06.569 16:14:37 -- target/shutdown.sh@28 -- # cat 00:24:06.569 16:14:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:06.569 16:14:37 -- target/shutdown.sh@28 -- # cat 00:24:06.569 16:14:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:06.569 16:14:37 -- target/shutdown.sh@28 -- # cat 00:24:06.569 16:14:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:06.569 16:14:37 -- target/shutdown.sh@28 -- # cat 00:24:06.569 16:14:37 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:06.569 16:14:37 -- target/shutdown.sh@28 -- # cat 00:24:06.569 16:14:37 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:06.569 16:14:37 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:06.569 16:14:37 -- common/autotest_common.sh@10 -- # set +x 00:24:06.569 Malloc1 00:24:06.828 [2024-11-20 16:14:37.379947] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:06.828 Malloc2 00:24:06.828 Malloc3 00:24:06.828 Malloc4 00:24:06.828 Malloc5 00:24:06.828 Malloc6 00:24:07.088 Malloc7 00:24:07.088 Malloc8 00:24:07.088 Malloc9 00:24:07.088 Malloc10 00:24:07.088 16:14:37 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:07.088 16:14:37 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:07.088 16:14:37 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:07.088 16:14:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.088 16:14:37 -- target/shutdown.sh@102 -- # perfpid=1433472 00:24:07.088 16:14:37 -- target/shutdown.sh@103 -- # waitforlisten 1433472 /var/tmp/bdevperf.sock 00:24:07.088 16:14:37 -- common/autotest_common.sh@829 -- # '[' -z 1433472 ']' 00:24:07.088 16:14:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.088 16:14:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:07.088 16:14:37 -- target/shutdown.sh@101 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:07.088 16:14:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:07.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:07.088 16:14:37 -- target/shutdown.sh@101 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:07.088 16:14:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:07.088 16:14:37 -- common/autotest_common.sh@10 -- # set +x 00:24:07.088 16:14:37 -- nvmf/common.sh@520 -- # config=() 00:24:07.088 16:14:37 -- nvmf/common.sh@520 -- # local subsystem config 00:24:07.088 16:14:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:07.088 16:14:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:07.088 { 00:24:07.088 "params": { 00:24:07.088 "name": "Nvme$subsystem", 00:24:07.088 "trtype": "$TEST_TRANSPORT", 00:24:07.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.088 "adrfam": "ipv4", 00:24:07.088 "trsvcid": "$NVMF_PORT", 00:24:07.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.088 "hdgst": ${hdgst:-false}, 00:24:07.088 "ddgst": ${ddgst:-false} 00:24:07.088 }, 00:24:07.088 "method": "bdev_nvme_attach_controller" 00:24:07.088 } 00:24:07.088 EOF 00:24:07.088 )") 00:24:07.088 16:14:37 -- nvmf/common.sh@542 -- # cat 00:24:07.088 16:14:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:07.088 16:14:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:07.088 { 00:24:07.088 "params": { 00:24:07.088 "name": "Nvme$subsystem", 00:24:07.088 "trtype": "$TEST_TRANSPORT", 00:24:07.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.088 "adrfam": "ipv4", 00:24:07.088 "trsvcid": "$NVMF_PORT", 00:24:07.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.088 "hdgst": ${hdgst:-false}, 00:24:07.088 "ddgst": ${ddgst:-false} 00:24:07.088 }, 00:24:07.088 "method": "bdev_nvme_attach_controller" 00:24:07.088 } 00:24:07.088 EOF 00:24:07.088 )") 00:24:07.088 16:14:37 -- nvmf/common.sh@542 -- # cat 00:24:07.088 16:14:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:07.088 16:14:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:07.088 { 00:24:07.088 "params": { 00:24:07.088 "name": "Nvme$subsystem", 00:24:07.088 "trtype": "$TEST_TRANSPORT", 00:24:07.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.088 "adrfam": "ipv4", 00:24:07.088 "trsvcid": "$NVMF_PORT", 00:24:07.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.088 "hdgst": ${hdgst:-false}, 00:24:07.088 "ddgst": ${ddgst:-false} 00:24:07.088 }, 00:24:07.088 "method": "bdev_nvme_attach_controller" 00:24:07.088 } 00:24:07.088 EOF 00:24:07.088 )") 00:24:07.088 16:14:37 -- nvmf/common.sh@542 -- # cat 00:24:07.088 16:14:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:07.088 16:14:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:07.088 { 00:24:07.088 "params": { 00:24:07.088 "name": "Nvme$subsystem", 00:24:07.088 "trtype": "$TEST_TRANSPORT", 00:24:07.088 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.088 "adrfam": "ipv4", 00:24:07.088 "trsvcid": "$NVMF_PORT", 00:24:07.088 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.088 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.089 "hdgst": ${hdgst:-false}, 00:24:07.089 "ddgst": ${ddgst:-false} 00:24:07.089 }, 00:24:07.089 "method": "bdev_nvme_attach_controller" 00:24:07.089 } 00:24:07.089 EOF 00:24:07.089 )") 00:24:07.089 16:14:37 -- nvmf/common.sh@542 -- # cat 00:24:07.089 16:14:37 -- nvmf/common.sh@522 -- # 
for subsystem in "${@:-1}" 00:24:07.089 16:14:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:07.089 { 00:24:07.089 "params": { 00:24:07.089 "name": "Nvme$subsystem", 00:24:07.089 "trtype": "$TEST_TRANSPORT", 00:24:07.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.089 "adrfam": "ipv4", 00:24:07.089 "trsvcid": "$NVMF_PORT", 00:24:07.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.089 "hdgst": ${hdgst:-false}, 00:24:07.089 "ddgst": ${ddgst:-false} 00:24:07.089 }, 00:24:07.089 "method": "bdev_nvme_attach_controller" 00:24:07.089 } 00:24:07.089 EOF 00:24:07.089 )") 00:24:07.089 16:14:37 -- nvmf/common.sh@542 -- # cat 00:24:07.089 [2024-11-20 16:14:37.870602] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:07.089 [2024-11-20 16:14:37.870656] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1433472 ] 00:24:07.089 16:14:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:07.089 16:14:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:07.089 { 00:24:07.089 "params": { 00:24:07.089 "name": "Nvme$subsystem", 00:24:07.089 "trtype": "$TEST_TRANSPORT", 00:24:07.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.089 "adrfam": "ipv4", 00:24:07.089 "trsvcid": "$NVMF_PORT", 00:24:07.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.089 "hdgst": ${hdgst:-false}, 00:24:07.089 "ddgst": ${ddgst:-false} 00:24:07.089 }, 00:24:07.089 "method": "bdev_nvme_attach_controller" 00:24:07.089 } 00:24:07.089 EOF 00:24:07.089 )") 00:24:07.089 16:14:37 -- nvmf/common.sh@542 -- # cat 00:24:07.089 16:14:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:07.089 16:14:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:07.089 { 00:24:07.089 "params": { 00:24:07.089 "name": "Nvme$subsystem", 00:24:07.089 "trtype": "$TEST_TRANSPORT", 00:24:07.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.089 "adrfam": "ipv4", 00:24:07.089 "trsvcid": "$NVMF_PORT", 00:24:07.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.089 "hdgst": ${hdgst:-false}, 00:24:07.089 "ddgst": ${ddgst:-false} 00:24:07.089 }, 00:24:07.089 "method": "bdev_nvme_attach_controller" 00:24:07.089 } 00:24:07.089 EOF 00:24:07.089 )") 00:24:07.089 16:14:37 -- nvmf/common.sh@542 -- # cat 00:24:07.089 16:14:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:07.089 16:14:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:07.089 { 00:24:07.089 "params": { 00:24:07.089 "name": "Nvme$subsystem", 00:24:07.089 "trtype": "$TEST_TRANSPORT", 00:24:07.089 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.089 "adrfam": "ipv4", 00:24:07.089 "trsvcid": "$NVMF_PORT", 00:24:07.089 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.089 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.089 "hdgst": ${hdgst:-false}, 00:24:07.089 "ddgst": ${ddgst:-false} 00:24:07.089 }, 00:24:07.089 "method": "bdev_nvme_attach_controller" 00:24:07.089 } 00:24:07.089 EOF 00:24:07.089 )") 00:24:07.348 16:14:37 -- nvmf/common.sh@542 -- # cat 00:24:07.348 16:14:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:07.348 16:14:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:07.348 { 00:24:07.348 
"params": { 00:24:07.348 "name": "Nvme$subsystem", 00:24:07.348 "trtype": "$TEST_TRANSPORT", 00:24:07.348 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.348 "adrfam": "ipv4", 00:24:07.348 "trsvcid": "$NVMF_PORT", 00:24:07.348 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.348 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.348 "hdgst": ${hdgst:-false}, 00:24:07.348 "ddgst": ${ddgst:-false} 00:24:07.349 }, 00:24:07.349 "method": "bdev_nvme_attach_controller" 00:24:07.349 } 00:24:07.349 EOF 00:24:07.349 )") 00:24:07.349 16:14:37 -- nvmf/common.sh@542 -- # cat 00:24:07.349 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.349 16:14:37 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:07.349 16:14:37 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:07.349 { 00:24:07.349 "params": { 00:24:07.349 "name": "Nvme$subsystem", 00:24:07.349 "trtype": "$TEST_TRANSPORT", 00:24:07.349 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.349 "adrfam": "ipv4", 00:24:07.349 "trsvcid": "$NVMF_PORT", 00:24:07.349 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.349 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.349 "hdgst": ${hdgst:-false}, 00:24:07.349 "ddgst": ${ddgst:-false} 00:24:07.349 }, 00:24:07.349 "method": "bdev_nvme_attach_controller" 00:24:07.349 } 00:24:07.349 EOF 00:24:07.349 )") 00:24:07.349 16:14:37 -- nvmf/common.sh@542 -- # cat 00:24:07.349 16:14:37 -- nvmf/common.sh@544 -- # jq . 00:24:07.349 16:14:37 -- nvmf/common.sh@545 -- # IFS=, 00:24:07.349 16:14:37 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:07.349 "params": { 00:24:07.349 "name": "Nvme1", 00:24:07.349 "trtype": "rdma", 00:24:07.349 "traddr": "192.168.100.8", 00:24:07.349 "adrfam": "ipv4", 00:24:07.349 "trsvcid": "4420", 00:24:07.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:07.349 "hdgst": false, 00:24:07.349 "ddgst": false 00:24:07.349 }, 00:24:07.349 "method": "bdev_nvme_attach_controller" 00:24:07.349 },{ 00:24:07.349 "params": { 00:24:07.349 "name": "Nvme2", 00:24:07.349 "trtype": "rdma", 00:24:07.349 "traddr": "192.168.100.8", 00:24:07.349 "adrfam": "ipv4", 00:24:07.349 "trsvcid": "4420", 00:24:07.349 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:07.349 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:07.349 "hdgst": false, 00:24:07.349 "ddgst": false 00:24:07.349 }, 00:24:07.349 "method": "bdev_nvme_attach_controller" 00:24:07.349 },{ 00:24:07.349 "params": { 00:24:07.349 "name": "Nvme3", 00:24:07.349 "trtype": "rdma", 00:24:07.349 "traddr": "192.168.100.8", 00:24:07.349 "adrfam": "ipv4", 00:24:07.349 "trsvcid": "4420", 00:24:07.349 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:07.349 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:07.349 "hdgst": false, 00:24:07.349 "ddgst": false 00:24:07.349 }, 00:24:07.349 "method": "bdev_nvme_attach_controller" 00:24:07.349 },{ 00:24:07.349 "params": { 00:24:07.349 "name": "Nvme4", 00:24:07.349 "trtype": "rdma", 00:24:07.349 "traddr": "192.168.100.8", 00:24:07.349 "adrfam": "ipv4", 00:24:07.349 "trsvcid": "4420", 00:24:07.349 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:07.349 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:07.349 "hdgst": false, 00:24:07.349 "ddgst": false 00:24:07.349 }, 00:24:07.349 "method": "bdev_nvme_attach_controller" 00:24:07.349 },{ 00:24:07.349 "params": { 00:24:07.349 "name": "Nvme5", 00:24:07.349 "trtype": "rdma", 00:24:07.349 "traddr": "192.168.100.8", 00:24:07.349 "adrfam": "ipv4", 00:24:07.349 "trsvcid": "4420", 00:24:07.349 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:24:07.349 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:07.349 "hdgst": false, 00:24:07.349 "ddgst": false 00:24:07.349 }, 00:24:07.349 "method": "bdev_nvme_attach_controller" 00:24:07.349 },{ 00:24:07.349 "params": { 00:24:07.349 "name": "Nvme6", 00:24:07.349 "trtype": "rdma", 00:24:07.349 "traddr": "192.168.100.8", 00:24:07.349 "adrfam": "ipv4", 00:24:07.349 "trsvcid": "4420", 00:24:07.349 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:07.349 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:07.349 "hdgst": false, 00:24:07.349 "ddgst": false 00:24:07.349 }, 00:24:07.349 "method": "bdev_nvme_attach_controller" 00:24:07.349 },{ 00:24:07.349 "params": { 00:24:07.349 "name": "Nvme7", 00:24:07.349 "trtype": "rdma", 00:24:07.349 "traddr": "192.168.100.8", 00:24:07.349 "adrfam": "ipv4", 00:24:07.349 "trsvcid": "4420", 00:24:07.349 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:07.349 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:07.349 "hdgst": false, 00:24:07.349 "ddgst": false 00:24:07.349 }, 00:24:07.349 "method": "bdev_nvme_attach_controller" 00:24:07.349 },{ 00:24:07.349 "params": { 00:24:07.349 "name": "Nvme8", 00:24:07.349 "trtype": "rdma", 00:24:07.349 "traddr": "192.168.100.8", 00:24:07.349 "adrfam": "ipv4", 00:24:07.349 "trsvcid": "4420", 00:24:07.349 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:07.349 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:07.349 "hdgst": false, 00:24:07.349 "ddgst": false 00:24:07.349 }, 00:24:07.349 "method": "bdev_nvme_attach_controller" 00:24:07.349 },{ 00:24:07.349 "params": { 00:24:07.349 "name": "Nvme9", 00:24:07.349 "trtype": "rdma", 00:24:07.349 "traddr": "192.168.100.8", 00:24:07.349 "adrfam": "ipv4", 00:24:07.349 "trsvcid": "4420", 00:24:07.349 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:07.349 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:07.349 "hdgst": false, 00:24:07.349 "ddgst": false 00:24:07.349 }, 00:24:07.349 "method": "bdev_nvme_attach_controller" 00:24:07.349 },{ 00:24:07.349 "params": { 00:24:07.349 "name": "Nvme10", 00:24:07.349 "trtype": "rdma", 00:24:07.349 "traddr": "192.168.100.8", 00:24:07.349 "adrfam": "ipv4", 00:24:07.349 "trsvcid": "4420", 00:24:07.349 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:07.349 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:07.349 "hdgst": false, 00:24:07.349 "ddgst": false 00:24:07.349 }, 00:24:07.349 "method": "bdev_nvme_attach_controller" 00:24:07.349 }' 00:24:07.349 [2024-11-20 16:14:37.943783] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.349 [2024-11-20 16:14:37.980466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.288 Running I/O for 10 seconds... 
00:24:08.857 16:14:39 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:08.857 16:14:39 -- common/autotest_common.sh@862 -- # return 0 00:24:08.857 16:14:39 -- target/shutdown.sh@104 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:08.857 16:14:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.857 16:14:39 -- common/autotest_common.sh@10 -- # set +x 00:24:08.857 16:14:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.857 16:14:39 -- target/shutdown.sh@106 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:08.857 16:14:39 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:08.857 16:14:39 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:08.857 16:14:39 -- target/shutdown.sh@57 -- # local ret=1 00:24:08.857 16:14:39 -- target/shutdown.sh@58 -- # local i 00:24:08.857 16:14:39 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:08.857 16:14:39 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:08.857 16:14:39 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:08.857 16:14:39 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:08.857 16:14:39 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:08.857 16:14:39 -- common/autotest_common.sh@10 -- # set +x 00:24:08.857 16:14:39 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:08.857 16:14:39 -- target/shutdown.sh@60 -- # read_io_count=491 00:24:08.857 16:14:39 -- target/shutdown.sh@63 -- # '[' 491 -ge 100 ']' 00:24:08.857 16:14:39 -- target/shutdown.sh@64 -- # ret=0 00:24:08.857 16:14:39 -- target/shutdown.sh@65 -- # break 00:24:08.857 16:14:39 -- target/shutdown.sh@69 -- # return 0 00:24:08.857 16:14:39 -- target/shutdown.sh@109 -- # killprocess 1433472 00:24:08.857 16:14:39 -- common/autotest_common.sh@936 -- # '[' -z 1433472 ']' 00:24:08.857 16:14:39 -- common/autotest_common.sh@940 -- # kill -0 1433472 00:24:08.857 16:14:39 -- common/autotest_common.sh@941 -- # uname 00:24:08.857 16:14:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:08.857 16:14:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1433472 00:24:09.117 16:14:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:09.117 16:14:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:09.117 16:14:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1433472' 00:24:09.117 killing process with pid 1433472 00:24:09.117 16:14:39 -- common/autotest_common.sh@955 -- # kill 1433472 00:24:09.117 16:14:39 -- common/autotest_common.sh@960 -- # wait 1433472 00:24:09.117 Received shutdown signal, test time was about 0.945325 seconds 00:24:09.117 00:24:09.117 Latency(us) 00:24:09.117 [2024-11-20T15:14:39.922Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.117 [2024-11-20T15:14:39.922Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.117 Verification LBA range: start 0x0 length 0x400 00:24:09.117 Nvme1n1 : 0.94 727.65 45.48 0.00 0.00 86879.56 7602.18 121634.82 00:24:09.117 [2024-11-20T15:14:39.922Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.117 Verification LBA range: start 0x0 length 0x400 00:24:09.117 Nvme2n1 : 0.94 738.64 46.16 0.00 0.00 84757.37 7811.89 75497.47 00:24:09.117 [2024-11-20T15:14:39.922Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.117 Verification LBA range: start 0x0 length 0x400 00:24:09.117 Nvme3n1 : 
0.94 737.89 46.12 0.00 0.00 84265.20 7969.18 73819.75 00:24:09.117 [2024-11-20T15:14:39.922Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.117 Verification LBA range: start 0x0 length 0x400 00:24:09.117 Nvme4n1 : 0.94 737.13 46.07 0.00 0.00 83788.65 8126.46 72561.46 00:24:09.117 [2024-11-20T15:14:39.922Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.117 Verification LBA range: start 0x0 length 0x400 00:24:09.117 Nvme5n1 : 0.94 736.38 46.02 0.00 0.00 83293.15 8231.32 71303.17 00:24:09.117 [2024-11-20T15:14:39.922Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.117 Verification LBA range: start 0x0 length 0x400 00:24:09.117 Nvme6n1 : 0.94 735.64 45.98 0.00 0.00 82783.64 8336.18 70883.74 00:24:09.117 [2024-11-20T15:14:39.922Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.117 Verification LBA range: start 0x0 length 0x400 00:24:09.117 Nvme7n1 : 0.94 734.89 45.93 0.00 0.00 82279.96 8493.47 72561.46 00:24:09.117 [2024-11-20T15:14:39.922Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.117 Verification LBA range: start 0x0 length 0x400 00:24:09.117 Nvme8n1 : 0.94 734.15 45.88 0.00 0.00 81765.90 8598.32 73819.75 00:24:09.117 [2024-11-20T15:14:39.922Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.117 Verification LBA range: start 0x0 length 0x400 00:24:09.117 Nvme9n1 : 0.94 733.41 45.84 0.00 0.00 81273.96 8755.61 75497.47 00:24:09.117 [2024-11-20T15:14:39.922Z] Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.117 Verification LBA range: start 0x0 length 0x400 00:24:09.117 Nvme10n1 : 0.94 511.36 31.96 0.00 0.00 115690.66 7811.89 333866.60 00:24:09.117 [2024-11-20T15:14:39.922Z] =================================================================================================================== 00:24:09.117 [2024-11-20T15:14:39.922Z] Total : 7127.14 445.45 0.00 0.00 85772.44 7602.18 333866.60 00:24:09.376 16:14:40 -- target/shutdown.sh@112 -- # sleep 1 00:24:10.373 16:14:41 -- target/shutdown.sh@113 -- # kill -0 1433152 00:24:10.373 16:14:41 -- target/shutdown.sh@115 -- # stoptarget 00:24:10.373 16:14:41 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:10.373 16:14:41 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:10.373 16:14:41 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:10.373 16:14:41 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:10.373 16:14:41 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:10.373 16:14:41 -- nvmf/common.sh@116 -- # sync 00:24:10.373 16:14:41 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:10.373 16:14:41 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:10.373 16:14:41 -- nvmf/common.sh@119 -- # set +e 00:24:10.373 16:14:41 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:10.373 16:14:41 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:10.373 rmmod nvme_rdma 00:24:10.373 rmmod nvme_fabrics 00:24:10.373 16:14:41 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:10.373 16:14:41 -- nvmf/common.sh@123 -- # set -e 00:24:10.373 16:14:41 -- nvmf/common.sh@124 -- # return 0 00:24:10.373 16:14:41 -- nvmf/common.sh@477 -- # '[' -n 1433152 ']' 00:24:10.373 16:14:41 -- nvmf/common.sh@478 -- # killprocess 1433152 00:24:10.373 16:14:41 -- 
common/autotest_common.sh@936 -- # '[' -z 1433152 ']' 00:24:10.373 16:14:41 -- common/autotest_common.sh@940 -- # kill -0 1433152 00:24:10.373 16:14:41 -- common/autotest_common.sh@941 -- # uname 00:24:10.373 16:14:41 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:10.373 16:14:41 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1433152 00:24:10.632 16:14:41 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:10.632 16:14:41 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:10.632 16:14:41 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1433152' 00:24:10.632 killing process with pid 1433152 00:24:10.632 16:14:41 -- common/autotest_common.sh@955 -- # kill 1433152 00:24:10.632 16:14:41 -- common/autotest_common.sh@960 -- # wait 1433152 00:24:10.892 16:14:41 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:10.892 16:14:41 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:10.892 00:24:10.892 real 0m5.704s 00:24:10.892 user 0m23.158s 00:24:10.892 sys 0m1.229s 00:24:10.892 16:14:41 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:10.892 16:14:41 -- common/autotest_common.sh@10 -- # set +x 00:24:10.892 ************************************ 00:24:10.892 END TEST nvmf_shutdown_tc2 00:24:10.892 ************************************ 00:24:10.892 16:14:41 -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:10.892 16:14:41 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:24:10.892 16:14:41 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:10.892 16:14:41 -- common/autotest_common.sh@10 -- # set +x 00:24:10.892 ************************************ 00:24:10.892 START TEST nvmf_shutdown_tc3 00:24:10.892 ************************************ 00:24:10.892 16:14:41 -- common/autotest_common.sh@1114 -- # nvmf_shutdown_tc3 00:24:10.892 16:14:41 -- target/shutdown.sh@120 -- # starttarget 00:24:10.892 16:14:41 -- target/shutdown.sh@15 -- # nvmftestinit 00:24:10.892 16:14:41 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:10.892 16:14:41 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:10.892 16:14:41 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:10.892 16:14:41 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:10.892 16:14:41 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:10.892 16:14:41 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:10.892 16:14:41 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:10.892 16:14:41 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.151 16:14:41 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:11.151 16:14:41 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:11.151 16:14:41 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:11.151 16:14:41 -- common/autotest_common.sh@10 -- # set +x 00:24:11.151 16:14:41 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:11.151 16:14:41 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:11.151 16:14:41 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:11.151 16:14:41 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:11.151 16:14:41 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:11.151 16:14:41 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:11.151 16:14:41 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:11.151 16:14:41 -- nvmf/common.sh@294 -- # net_devs=() 00:24:11.151 16:14:41 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:11.151 16:14:41 -- 
nvmf/common.sh@295 -- # e810=() 00:24:11.151 16:14:41 -- nvmf/common.sh@295 -- # local -ga e810 00:24:11.151 16:14:41 -- nvmf/common.sh@296 -- # x722=() 00:24:11.151 16:14:41 -- nvmf/common.sh@296 -- # local -ga x722 00:24:11.151 16:14:41 -- nvmf/common.sh@297 -- # mlx=() 00:24:11.152 16:14:41 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:11.152 16:14:41 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:11.152 16:14:41 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:11.152 16:14:41 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:11.152 16:14:41 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:11.152 16:14:41 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:11.152 16:14:41 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:11.152 16:14:41 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:11.152 16:14:41 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:11.152 16:14:41 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:11.152 16:14:41 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:11.152 16:14:41 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:11.152 16:14:41 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:11.152 16:14:41 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:11.152 16:14:41 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:11.152 16:14:41 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:11.152 16:14:41 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:11.152 16:14:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:11.152 16:14:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:11.152 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:11.152 16:14:41 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:11.152 16:14:41 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:11.152 16:14:41 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:11.152 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:11.152 16:14:41 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:11.152 16:14:41 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:11.152 16:14:41 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:11.152 16:14:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.152 16:14:41 -- nvmf/common.sh@383 -- # 
(( 1 == 0 )) 00:24:11.152 16:14:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.152 16:14:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:11.152 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:11.152 16:14:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.152 16:14:41 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:11.152 16:14:41 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:11.152 16:14:41 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:11.152 16:14:41 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:11.152 16:14:41 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:11.152 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:11.152 16:14:41 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:11.152 16:14:41 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:11.152 16:14:41 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:11.152 16:14:41 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:11.152 16:14:41 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:11.152 16:14:41 -- nvmf/common.sh@57 -- # uname 00:24:11.152 16:14:41 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:11.152 16:14:41 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:11.152 16:14:41 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:11.152 16:14:41 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:11.152 16:14:41 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:11.152 16:14:41 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:11.152 16:14:41 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:11.152 16:14:41 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:11.152 16:14:41 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:11.152 16:14:41 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:11.152 16:14:41 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:11.152 16:14:41 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:11.152 16:14:41 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:11.152 16:14:41 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:11.152 16:14:41 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:11.152 16:14:41 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:11.152 16:14:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:11.152 16:14:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:11.152 16:14:41 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:11.152 16:14:41 -- nvmf/common.sh@104 -- # continue 2 00:24:11.152 16:14:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:11.152 16:14:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:11.152 16:14:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:11.152 16:14:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:11.152 16:14:41 -- nvmf/common.sh@104 -- # continue 2 00:24:11.152 16:14:41 -- nvmf/common.sh@72 -- # for nic_name 
in $(get_rdma_if_list) 00:24:11.152 16:14:41 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:11.152 16:14:41 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:11.152 16:14:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:11.152 16:14:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:11.152 16:14:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:11.152 16:14:41 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:11.152 16:14:41 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:11.152 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:11.152 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:11.152 altname enp217s0f0np0 00:24:11.152 altname ens818f0np0 00:24:11.152 inet 192.168.100.8/24 scope global mlx_0_0 00:24:11.152 valid_lft forever preferred_lft forever 00:24:11.152 16:14:41 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:11.152 16:14:41 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:11.152 16:14:41 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:11.152 16:14:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:11.152 16:14:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:11.152 16:14:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:11.152 16:14:41 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:11.152 16:14:41 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:11.152 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:11.152 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:11.152 altname enp217s0f1np1 00:24:11.152 altname ens818f1np1 00:24:11.152 inet 192.168.100.9/24 scope global mlx_0_1 00:24:11.152 valid_lft forever preferred_lft forever 00:24:11.152 16:14:41 -- nvmf/common.sh@410 -- # return 0 00:24:11.152 16:14:41 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:11.152 16:14:41 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:11.152 16:14:41 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:11.152 16:14:41 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:11.152 16:14:41 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:11.153 16:14:41 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:11.153 16:14:41 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:11.153 16:14:41 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:11.153 16:14:41 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:11.153 16:14:41 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:11.153 16:14:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:11.153 16:14:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:11.153 16:14:41 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:11.153 16:14:41 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:11.153 16:14:41 -- nvmf/common.sh@104 -- # continue 2 00:24:11.153 16:14:41 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:11.153 16:14:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:11.153 16:14:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:11.153 16:14:41 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:11.153 16:14:41 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:11.153 16:14:41 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:11.153 16:14:41 -- 
nvmf/common.sh@104 -- # continue 2 00:24:11.153 16:14:41 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:11.153 16:14:41 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:11.153 16:14:41 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:11.153 16:14:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:11.153 16:14:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:11.153 16:14:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:11.153 16:14:41 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:11.153 16:14:41 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:11.153 16:14:41 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:11.153 16:14:41 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:11.153 16:14:41 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:11.153 16:14:41 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:11.153 16:14:41 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:11.153 192.168.100.9' 00:24:11.153 16:14:41 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:11.153 192.168.100.9' 00:24:11.153 16:14:41 -- nvmf/common.sh@445 -- # head -n 1 00:24:11.153 16:14:41 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:11.153 16:14:41 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:11.153 192.168.100.9' 00:24:11.153 16:14:41 -- nvmf/common.sh@446 -- # tail -n +2 00:24:11.153 16:14:41 -- nvmf/common.sh@446 -- # head -n 1 00:24:11.153 16:14:41 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:11.153 16:14:41 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:11.153 16:14:41 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:11.153 16:14:41 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:11.153 16:14:41 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:11.153 16:14:41 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:11.153 16:14:41 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:24:11.153 16:14:41 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:11.153 16:14:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:11.153 16:14:41 -- common/autotest_common.sh@10 -- # set +x 00:24:11.412 16:14:41 -- nvmf/common.sh@469 -- # nvmfpid=1434389 00:24:11.412 16:14:41 -- nvmf/common.sh@470 -- # waitforlisten 1434389 00:24:11.412 16:14:41 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:11.412 16:14:41 -- common/autotest_common.sh@829 -- # '[' -z 1434389 ']' 00:24:11.412 16:14:41 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:11.412 16:14:41 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.412 16:14:41 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:11.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:11.412 16:14:41 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.412 16:14:41 -- common/autotest_common.sh@10 -- # set +x 00:24:11.412 [2024-11-20 16:14:42.005503] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:11.412 [2024-11-20 16:14:42.005560] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:11.412 EAL: No free 2048 kB hugepages reported on node 1 00:24:11.412 [2024-11-20 16:14:42.074820] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:11.412 [2024-11-20 16:14:42.111993] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:11.412 [2024-11-20 16:14:42.112102] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:11.412 [2024-11-20 16:14:42.112112] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:11.412 [2024-11-20 16:14:42.112121] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:11.412 [2024-11-20 16:14:42.112230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:11.412 [2024-11-20 16:14:42.112317] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:11.412 [2024-11-20 16:14:42.112416] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:11.412 [2024-11-20 16:14:42.112418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:12.350 16:14:42 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.350 16:14:42 -- common/autotest_common.sh@862 -- # return 0 00:24:12.350 16:14:42 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:12.350 16:14:42 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:12.350 16:14:42 -- common/autotest_common.sh@10 -- # set +x 00:24:12.350 16:14:42 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:12.350 16:14:42 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:12.350 16:14:42 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.350 16:14:42 -- common/autotest_common.sh@10 -- # set +x 00:24:12.350 [2024-11-20 16:14:42.895362] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x6f93c0/0x6fd890) succeed. 00:24:12.350 [2024-11-20 16:14:42.904687] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x6fa960/0x73ef30) succeed. 
00:24:12.350 16:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.350 16:14:43 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:24:12.350 16:14:43 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:24:12.350 16:14:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:12.350 16:14:43 -- common/autotest_common.sh@10 -- # set +x 00:24:12.350 16:14:43 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:12.350 16:14:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.350 16:14:43 -- target/shutdown.sh@28 -- # cat 00:24:12.350 16:14:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.350 16:14:43 -- target/shutdown.sh@28 -- # cat 00:24:12.350 16:14:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.350 16:14:43 -- target/shutdown.sh@28 -- # cat 00:24:12.350 16:14:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.350 16:14:43 -- target/shutdown.sh@28 -- # cat 00:24:12.350 16:14:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.350 16:14:43 -- target/shutdown.sh@28 -- # cat 00:24:12.350 16:14:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.350 16:14:43 -- target/shutdown.sh@28 -- # cat 00:24:12.350 16:14:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.350 16:14:43 -- target/shutdown.sh@28 -- # cat 00:24:12.350 16:14:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.350 16:14:43 -- target/shutdown.sh@28 -- # cat 00:24:12.350 16:14:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.350 16:14:43 -- target/shutdown.sh@28 -- # cat 00:24:12.350 16:14:43 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:24:12.350 16:14:43 -- target/shutdown.sh@28 -- # cat 00:24:12.350 16:14:43 -- target/shutdown.sh@35 -- # rpc_cmd 00:24:12.350 16:14:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.350 16:14:43 -- common/autotest_common.sh@10 -- # set +x 00:24:12.350 Malloc1 00:24:12.350 [2024-11-20 16:14:43.130229] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:12.350 Malloc2 00:24:12.610 Malloc3 00:24:12.610 Malloc4 00:24:12.610 Malloc5 00:24:12.610 Malloc6 00:24:12.610 Malloc7 00:24:12.870 Malloc8 00:24:12.870 Malloc9 00:24:12.870 Malloc10 00:24:12.870 16:14:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.870 16:14:43 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:24:12.870 16:14:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:12.870 16:14:43 -- common/autotest_common.sh@10 -- # set +x 00:24:12.870 16:14:43 -- target/shutdown.sh@124 -- # perfpid=1434707 00:24:12.870 16:14:43 -- target/shutdown.sh@125 -- # waitforlisten 1434707 /var/tmp/bdevperf.sock 00:24:12.870 16:14:43 -- common/autotest_common.sh@829 -- # '[' -z 1434707 ']' 00:24:12.870 16:14:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:12.870 16:14:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.870 16:14:43 -- target/shutdown.sh@123 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:12.870 16:14:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:12.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:12.870 16:14:43 -- target/shutdown.sh@123 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:12.870 16:14:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.870 16:14:43 -- nvmf/common.sh@520 -- # config=() 00:24:12.870 16:14:43 -- common/autotest_common.sh@10 -- # set +x 00:24:12.870 16:14:43 -- nvmf/common.sh@520 -- # local subsystem config 00:24:12.870 16:14:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.870 16:14:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.870 { 00:24:12.870 "params": { 00:24:12.870 "name": "Nvme$subsystem", 00:24:12.870 "trtype": "$TEST_TRANSPORT", 00:24:12.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.870 "adrfam": "ipv4", 00:24:12.870 "trsvcid": "$NVMF_PORT", 00:24:12.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.870 "hdgst": ${hdgst:-false}, 00:24:12.870 "ddgst": ${ddgst:-false} 00:24:12.870 }, 00:24:12.870 "method": "bdev_nvme_attach_controller" 00:24:12.870 } 00:24:12.870 EOF 00:24:12.870 )") 00:24:12.870 16:14:43 -- nvmf/common.sh@542 -- # cat 00:24:12.870 16:14:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.870 16:14:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.870 { 00:24:12.870 "params": { 00:24:12.870 "name": "Nvme$subsystem", 00:24:12.870 "trtype": "$TEST_TRANSPORT", 00:24:12.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.870 "adrfam": "ipv4", 00:24:12.870 "trsvcid": "$NVMF_PORT", 00:24:12.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.870 "hdgst": ${hdgst:-false}, 00:24:12.870 "ddgst": ${ddgst:-false} 00:24:12.870 }, 00:24:12.870 "method": "bdev_nvme_attach_controller" 00:24:12.870 } 00:24:12.870 EOF 00:24:12.870 )") 00:24:12.870 16:14:43 -- nvmf/common.sh@542 -- # cat 00:24:12.870 16:14:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.870 16:14:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.870 { 00:24:12.870 "params": { 00:24:12.870 "name": "Nvme$subsystem", 00:24:12.870 "trtype": "$TEST_TRANSPORT", 00:24:12.870 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.870 "adrfam": "ipv4", 00:24:12.870 "trsvcid": "$NVMF_PORT", 00:24:12.870 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.870 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.870 "hdgst": ${hdgst:-false}, 00:24:12.870 "ddgst": ${ddgst:-false} 00:24:12.870 }, 00:24:12.870 "method": "bdev_nvme_attach_controller" 00:24:12.870 } 00:24:12.870 EOF 00:24:12.870 )") 00:24:12.870 16:14:43 -- nvmf/common.sh@542 -- # cat 00:24:12.870 16:14:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.871 16:14:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.871 { 00:24:12.871 "params": { 00:24:12.871 "name": "Nvme$subsystem", 00:24:12.871 "trtype": "$TEST_TRANSPORT", 00:24:12.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.871 "adrfam": "ipv4", 00:24:12.871 "trsvcid": "$NVMF_PORT", 00:24:12.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.871 "hdgst": ${hdgst:-false}, 00:24:12.871 "ddgst": ${ddgst:-false} 00:24:12.871 }, 00:24:12.871 "method": "bdev_nvme_attach_controller" 00:24:12.871 } 00:24:12.871 EOF 00:24:12.871 )") 00:24:12.871 16:14:43 -- nvmf/common.sh@542 -- # cat 00:24:12.871 16:14:43 -- nvmf/common.sh@522 -- # 
for subsystem in "${@:-1}" 00:24:12.871 16:14:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.871 { 00:24:12.871 "params": { 00:24:12.871 "name": "Nvme$subsystem", 00:24:12.871 "trtype": "$TEST_TRANSPORT", 00:24:12.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.871 "adrfam": "ipv4", 00:24:12.871 "trsvcid": "$NVMF_PORT", 00:24:12.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.871 "hdgst": ${hdgst:-false}, 00:24:12.871 "ddgst": ${ddgst:-false} 00:24:12.871 }, 00:24:12.871 "method": "bdev_nvme_attach_controller" 00:24:12.871 } 00:24:12.871 EOF 00:24:12.871 )") 00:24:12.871 16:14:43 -- nvmf/common.sh@542 -- # cat 00:24:12.871 [2024-11-20 16:14:43.611601] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:12.871 [2024-11-20 16:14:43.611655] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1434707 ] 00:24:12.871 16:14:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.871 16:14:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.871 { 00:24:12.871 "params": { 00:24:12.871 "name": "Nvme$subsystem", 00:24:12.871 "trtype": "$TEST_TRANSPORT", 00:24:12.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.871 "adrfam": "ipv4", 00:24:12.871 "trsvcid": "$NVMF_PORT", 00:24:12.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.871 "hdgst": ${hdgst:-false}, 00:24:12.871 "ddgst": ${ddgst:-false} 00:24:12.871 }, 00:24:12.871 "method": "bdev_nvme_attach_controller" 00:24:12.871 } 00:24:12.871 EOF 00:24:12.871 )") 00:24:12.871 16:14:43 -- nvmf/common.sh@542 -- # cat 00:24:12.871 16:14:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.871 16:14:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.871 { 00:24:12.871 "params": { 00:24:12.871 "name": "Nvme$subsystem", 00:24:12.871 "trtype": "$TEST_TRANSPORT", 00:24:12.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.871 "adrfam": "ipv4", 00:24:12.871 "trsvcid": "$NVMF_PORT", 00:24:12.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.871 "hdgst": ${hdgst:-false}, 00:24:12.871 "ddgst": ${ddgst:-false} 00:24:12.871 }, 00:24:12.871 "method": "bdev_nvme_attach_controller" 00:24:12.871 } 00:24:12.871 EOF 00:24:12.871 )") 00:24:12.871 16:14:43 -- nvmf/common.sh@542 -- # cat 00:24:12.871 16:14:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.871 16:14:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.871 { 00:24:12.871 "params": { 00:24:12.871 "name": "Nvme$subsystem", 00:24:12.871 "trtype": "$TEST_TRANSPORT", 00:24:12.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.871 "adrfam": "ipv4", 00:24:12.871 "trsvcid": "$NVMF_PORT", 00:24:12.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.871 "hdgst": ${hdgst:-false}, 00:24:12.871 "ddgst": ${ddgst:-false} 00:24:12.871 }, 00:24:12.871 "method": "bdev_nvme_attach_controller" 00:24:12.871 } 00:24:12.871 EOF 00:24:12.871 )") 00:24:12.871 16:14:43 -- nvmf/common.sh@542 -- # cat 00:24:12.871 16:14:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.871 16:14:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.871 { 00:24:12.871 
"params": { 00:24:12.871 "name": "Nvme$subsystem", 00:24:12.871 "trtype": "$TEST_TRANSPORT", 00:24:12.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.871 "adrfam": "ipv4", 00:24:12.871 "trsvcid": "$NVMF_PORT", 00:24:12.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.871 "hdgst": ${hdgst:-false}, 00:24:12.871 "ddgst": ${ddgst:-false} 00:24:12.871 }, 00:24:12.871 "method": "bdev_nvme_attach_controller" 00:24:12.871 } 00:24:12.871 EOF 00:24:12.871 )") 00:24:12.871 16:14:43 -- nvmf/common.sh@542 -- # cat 00:24:12.871 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.871 16:14:43 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:12.871 16:14:43 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:12.871 { 00:24:12.871 "params": { 00:24:12.871 "name": "Nvme$subsystem", 00:24:12.871 "trtype": "$TEST_TRANSPORT", 00:24:12.871 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:12.871 "adrfam": "ipv4", 00:24:12.871 "trsvcid": "$NVMF_PORT", 00:24:12.871 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:12.871 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:12.871 "hdgst": ${hdgst:-false}, 00:24:12.871 "ddgst": ${ddgst:-false} 00:24:12.871 }, 00:24:12.871 "method": "bdev_nvme_attach_controller" 00:24:12.871 } 00:24:12.871 EOF 00:24:12.871 )") 00:24:12.871 16:14:43 -- nvmf/common.sh@542 -- # cat 00:24:12.871 16:14:43 -- nvmf/common.sh@544 -- # jq . 00:24:12.871 16:14:43 -- nvmf/common.sh@545 -- # IFS=, 00:24:12.871 16:14:43 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:12.871 "params": { 00:24:12.871 "name": "Nvme1", 00:24:12.871 "trtype": "rdma", 00:24:12.871 "traddr": "192.168.100.8", 00:24:12.871 "adrfam": "ipv4", 00:24:12.871 "trsvcid": "4420", 00:24:12.871 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:12.871 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:12.871 "hdgst": false, 00:24:12.871 "ddgst": false 00:24:12.871 }, 00:24:12.871 "method": "bdev_nvme_attach_controller" 00:24:12.871 },{ 00:24:12.871 "params": { 00:24:12.871 "name": "Nvme2", 00:24:12.871 "trtype": "rdma", 00:24:12.871 "traddr": "192.168.100.8", 00:24:12.871 "adrfam": "ipv4", 00:24:12.871 "trsvcid": "4420", 00:24:12.871 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:12.871 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:12.871 "hdgst": false, 00:24:12.871 "ddgst": false 00:24:12.871 }, 00:24:12.871 "method": "bdev_nvme_attach_controller" 00:24:12.871 },{ 00:24:12.871 "params": { 00:24:12.871 "name": "Nvme3", 00:24:12.871 "trtype": "rdma", 00:24:12.871 "traddr": "192.168.100.8", 00:24:12.871 "adrfam": "ipv4", 00:24:12.871 "trsvcid": "4420", 00:24:12.871 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:12.871 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:12.871 "hdgst": false, 00:24:12.871 "ddgst": false 00:24:12.871 }, 00:24:12.871 "method": "bdev_nvme_attach_controller" 00:24:12.871 },{ 00:24:12.871 "params": { 00:24:12.871 "name": "Nvme4", 00:24:12.871 "trtype": "rdma", 00:24:12.871 "traddr": "192.168.100.8", 00:24:12.871 "adrfam": "ipv4", 00:24:12.871 "trsvcid": "4420", 00:24:12.871 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:12.871 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:12.871 "hdgst": false, 00:24:12.871 "ddgst": false 00:24:12.871 }, 00:24:12.871 "method": "bdev_nvme_attach_controller" 00:24:12.871 },{ 00:24:12.871 "params": { 00:24:12.871 "name": "Nvme5", 00:24:12.871 "trtype": "rdma", 00:24:12.871 "traddr": "192.168.100.8", 00:24:12.871 "adrfam": "ipv4", 00:24:12.871 "trsvcid": "4420", 00:24:12.871 "subnqn": 
"nqn.2016-06.io.spdk:cnode5", 00:24:12.871 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:12.871 "hdgst": false, 00:24:12.871 "ddgst": false 00:24:12.871 }, 00:24:12.871 "method": "bdev_nvme_attach_controller" 00:24:12.871 },{ 00:24:12.871 "params": { 00:24:12.871 "name": "Nvme6", 00:24:12.871 "trtype": "rdma", 00:24:12.871 "traddr": "192.168.100.8", 00:24:12.871 "adrfam": "ipv4", 00:24:12.871 "trsvcid": "4420", 00:24:12.871 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:12.871 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:12.871 "hdgst": false, 00:24:12.871 "ddgst": false 00:24:12.871 }, 00:24:12.871 "method": "bdev_nvme_attach_controller" 00:24:12.871 },{ 00:24:12.871 "params": { 00:24:12.871 "name": "Nvme7", 00:24:12.871 "trtype": "rdma", 00:24:12.871 "traddr": "192.168.100.8", 00:24:12.871 "adrfam": "ipv4", 00:24:12.871 "trsvcid": "4420", 00:24:12.871 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:12.871 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:12.871 "hdgst": false, 00:24:12.871 "ddgst": false 00:24:12.871 }, 00:24:12.871 "method": "bdev_nvme_attach_controller" 00:24:12.871 },{ 00:24:12.871 "params": { 00:24:12.871 "name": "Nvme8", 00:24:12.871 "trtype": "rdma", 00:24:12.871 "traddr": "192.168.100.8", 00:24:12.871 "adrfam": "ipv4", 00:24:12.871 "trsvcid": "4420", 00:24:12.872 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:12.872 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:12.872 "hdgst": false, 00:24:12.872 "ddgst": false 00:24:12.872 }, 00:24:12.872 "method": "bdev_nvme_attach_controller" 00:24:12.872 },{ 00:24:12.872 "params": { 00:24:12.872 "name": "Nvme9", 00:24:12.872 "trtype": "rdma", 00:24:12.872 "traddr": "192.168.100.8", 00:24:12.872 "adrfam": "ipv4", 00:24:12.872 "trsvcid": "4420", 00:24:12.872 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:12.872 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:12.872 "hdgst": false, 00:24:12.872 "ddgst": false 00:24:12.872 }, 00:24:12.872 "method": "bdev_nvme_attach_controller" 00:24:12.872 },{ 00:24:12.872 "params": { 00:24:12.872 "name": "Nvme10", 00:24:12.872 "trtype": "rdma", 00:24:12.872 "traddr": "192.168.100.8", 00:24:12.872 "adrfam": "ipv4", 00:24:12.872 "trsvcid": "4420", 00:24:12.872 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:12.872 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:12.872 "hdgst": false, 00:24:12.872 "ddgst": false 00:24:12.872 }, 00:24:12.872 "method": "bdev_nvme_attach_controller" 00:24:12.872 }' 00:24:13.131 [2024-11-20 16:14:43.684724] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.131 [2024-11-20 16:14:43.720896] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.070 Running I/O for 10 seconds... 
00:24:14.638 16:14:45 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:14.638 16:14:45 -- common/autotest_common.sh@862 -- # return 0 00:24:14.638 16:14:45 -- target/shutdown.sh@126 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:14.638 16:14:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.638 16:14:45 -- common/autotest_common.sh@10 -- # set +x 00:24:14.638 16:14:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.638 16:14:45 -- target/shutdown.sh@129 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:14.638 16:14:45 -- target/shutdown.sh@131 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:14.639 16:14:45 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:14.639 16:14:45 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:14.639 16:14:45 -- target/shutdown.sh@57 -- # local ret=1 00:24:14.639 16:14:45 -- target/shutdown.sh@58 -- # local i 00:24:14.639 16:14:45 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:14.639 16:14:45 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:14.639 16:14:45 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:14.639 16:14:45 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:14.639 16:14:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.639 16:14:45 -- common/autotest_common.sh@10 -- # set +x 00:24:14.639 16:14:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.639 16:14:45 -- target/shutdown.sh@60 -- # read_io_count=491 00:24:14.639 16:14:45 -- target/shutdown.sh@63 -- # '[' 491 -ge 100 ']' 00:24:14.639 16:14:45 -- target/shutdown.sh@64 -- # ret=0 00:24:14.639 16:14:45 -- target/shutdown.sh@65 -- # break 00:24:14.639 16:14:45 -- target/shutdown.sh@69 -- # return 0 00:24:14.639 16:14:45 -- target/shutdown.sh@134 -- # killprocess 1434389 00:24:14.639 16:14:45 -- common/autotest_common.sh@936 -- # '[' -z 1434389 ']' 00:24:14.639 16:14:45 -- common/autotest_common.sh@940 -- # kill -0 1434389 00:24:14.639 16:14:45 -- common/autotest_common.sh@941 -- # uname 00:24:14.639 16:14:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:14.639 16:14:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1434389 00:24:14.898 16:14:45 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:24:14.898 16:14:45 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:24:14.898 16:14:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1434389' 00:24:14.898 killing process with pid 1434389 00:24:14.898 16:14:45 -- common/autotest_common.sh@955 -- # kill 1434389 00:24:14.898 16:14:45 -- common/autotest_common.sh@960 -- # wait 1434389 00:24:15.158 16:14:45 -- target/shutdown.sh@135 -- # nvmfpid= 00:24:15.158 16:14:45 -- target/shutdown.sh@138 -- # sleep 1 00:24:15.726 [2024-11-20 16:14:46.508122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.726 [2024-11-20 16:14:46.508162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:78c8 p:0 m:0 dnr:0 00:24:15.726 [2024-11-20 16:14:46.508175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.726 [2024-11-20 16:14:46.508185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:78c8 p:0 m:0 dnr:0 00:24:15.726 [2024-11-20 16:14:46.508195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.726 [2024-11-20 16:14:46.508209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:78c8 p:0 m:0 dnr:0 00:24:15.726 [2024-11-20 16:14:46.508218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.726 [2024-11-20 16:14:46.508226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:78c8 p:0 m:0 dnr:0 00:24:15.726 [2024-11-20 16:14:46.510636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.726 [2024-11-20 16:14:46.510685] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:15.726 [2024-11-20 16:14:46.510753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.726 [2024-11-20 16:14:46.510786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:b238 p:0 m:0 dnr:0 00:24:15.726 [2024-11-20 16:14:46.510818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.726 [2024-11-20 16:14:46.510831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:b238 p:0 m:0 dnr:0 00:24:15.726 [2024-11-20 16:14:46.510845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.726 [2024-11-20 16:14:46.510857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:b238 p:0 m:0 dnr:0 00:24:15.726 [2024-11-20 16:14:46.510870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.726 [2024-11-20 16:14:46.510883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:b238 p:0 m:0 dnr:0 00:24:15.726 [2024-11-20 16:14:46.513204] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.726 [2024-11-20 16:14:46.513247] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
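The waitforio xtrace a little further up shows how target/shutdown.sh decides that I/O is actually flowing before it brings the target down: it polls bdev_get_iostat for Nvme1n1 over the bdevperf RPC socket, up to ten attempts, and breaks once num_read_ops reaches 100 (here it read 491 on the first pass). A minimal sketch of that poll, assuming the rpc_cmd helper seen in the trace; the sleep between attempts is an assumption, since the trace succeeds immediately:

    waitforio() {
        local sock=$1 bdev=$2 ret=1 i
        for ((i = 10; i != 0; i--)); do
            local reads
            # same query and jq filter as in the trace above
            reads=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
                jq -r '.bdevs[0].num_read_ops')
            if [ "$reads" -ge 100 ]; then
                ret=0
                break
            fi
            sleep 1
        done
        return $ret
    }

Once the poll returns 0, the script kills the nvmf target (pid 1434389 above), which is what produces the CQ transport errors and failed-state messages in this part of the log.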
00:24:15.726 [2024-11-20 16:14:46.513297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.726 [2024-11-20 16:14:46.513330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:8b80 p:0 m:0 dnr:0 00:24:15.726 [2024-11-20 16:14:46.513363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.726 [2024-11-20 16:14:46.513395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:8b80 p:0 m:0 dnr:0 00:24:15.726 [2024-11-20 16:14:46.513427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.726 [2024-11-20 16:14:46.513458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:8b80 p:0 m:0 dnr:0 00:24:15.726 [2024-11-20 16:14:46.513490] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.513533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:8b80 p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.515489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.727 [2024-11-20 16:14:46.515539] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:15.727 [2024-11-20 16:14:46.515600] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.515634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:32ae p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.515666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.515698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:32ae p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.515730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.515762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:32ae p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.515793] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.515824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:32ae p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.517943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.727 [2024-11-20 16:14:46.517984] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 
00:24:15.727 [2024-11-20 16:14:46.518036] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.518069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:e8fa p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.518103] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.518134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:e8fa p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.518166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.518197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:e8fa p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.518228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.518260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:e8fa p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.520423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.727 [2024-11-20 16:14:46.520441] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:15.727 [2024-11-20 16:14:46.520462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.520476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:4c18 p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.520489] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.520502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:4c18 p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.520515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.520548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:4c18 p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.520566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.520578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:4c18 p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.522692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.727 [2024-11-20 16:14:46.522733] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 
00:24:15.727 [2024-11-20 16:14:46.522781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.522813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:745c p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.522853] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.522865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:745c p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.522878] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.522891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:745c p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.522904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.522916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:745c p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.525224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.727 [2024-11-20 16:14:46.525242] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:15.727 [2024-11-20 16:14:46.525262] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.525275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:f430 p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.525289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.525301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:f430 p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.525314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.525327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:f430 p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.525340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.525352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:f430 p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.527500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.727 [2024-11-20 16:14:46.527540] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:24:15.727 [2024-11-20 16:14:46.527561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.527579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:77ae p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.527593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.527605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:77ae p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.527618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.527630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:77ae p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.527644] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.527656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:77ae p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.529302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.727 [2024-11-20 16:14:46.529320] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:15.727 [2024-11-20 16:14:46.529340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.529355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:2918 p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.529368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.529380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:2918 p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.529393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.529406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:2918 p:0 m:0 dnr:0 00:24:15.727 [2024-11-20 16:14:46.529420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:15.727 [2024-11-20 16:14:46.529432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:53966 cdw0:b26b48b0 sqhd:2918 p:0 m:0 dnr:0 00:24:15.998 [2024-11-20 16:14:46.531671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:15.998 [2024-11-20 16:14:46.531719] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
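At this point all ten attached controllers (cnode1 through cnode10) have logged CQ transport error -6 and been marked in failed state, consistent with the target process having just been killed while bdevperf still had I/O outstanding. A quick sanity check against a saved copy of this console output (the file name is illustrative):

    # one nvme_ctrlr_fail line per controller is expected here
    grep -c 'nvme_ctrlr_fail: .*in failed state' nvmf-shutdown-console.log

The I/O-queue dumps that follow list the individual READ/WRITE commands that were still in flight on each qpair when it was torn down.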
00:24:15.998 [2024-11-20 16:14:46.531742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001907fc00 len:0x10000 key:0x181400 00:24:15.998 [2024-11-20 16:14:46.531756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:79c4 p:0 m:0 dnr:0 00:24:15.998 [2024-11-20 16:14:46.531792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138da900 len:0x10000 key:0x181500 00:24:15.998 [2024-11-20 16:14:46.531806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:79c4 p:0 m:0 dnr:0 00:24:15.998 [2024-11-20 16:14:46.531825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001381a300 len:0x10000 key:0x181500 00:24:15.998 [2024-11-20 16:14:46.531838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:79c4 p:0 m:0 dnr:0 00:24:15.998 [2024-11-20 16:14:46.531859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200018e6fb80 len:0x10000 key:0x181d00 00:24:15.998 [2024-11-20 16:14:46.531872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:79c4 p:0 m:0 dnr:0 00:24:15.998 [2024-11-20 16:14:46.531890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa780 len:0x10000 key:0x181500 00:24:15.998 [2024-11-20 16:14:46.531903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:79c4 p:0 m:0 dnr:0 00:24:15.998 [2024-11-20 16:14:46.534831] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019283280 was disconnected and freed. reset controller. 00:24:15.998 [2024-11-20 16:14:46.534864] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:15.998 [2024-11-20 16:14:46.536027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e0e9c0 len:0x10000 key:0x184300 00:24:15.998 [2024-11-20 16:14:46.536047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.998 [2024-11-20 16:14:46.536068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b29fb00 len:0x10000 key:0x183e00 00:24:15.998 [2024-11-20 16:14:46.536083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.998 [2024-11-20 16:14:46.536100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000009df300 len:0x10000 key:0x183b00 00:24:15.998 [2024-11-20 16:14:46.536113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.998 [2024-11-20 16:14:46.536131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002774c0 len:0x10000 key:0x183f00 00:24:15.998 [2024-11-20 16:14:46.536145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.998 [2024-11-20 16:14:46.536163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000704f280 len:0x10000 key:0x183900 00:24:15.998 [2024-11-20 16:14:46.536177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.998 [2024-11-20 16:14:46.536194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000713fa00 len:0x10000 key:0x183900 00:24:15.998 [2024-11-20 16:14:46.536208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.998 [2024-11-20 16:14:46.536226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002975c0 len:0x10000 key:0x183f00 00:24:15.998 [2024-11-20 16:14:46.536239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.998 [2024-11-20 16:14:46.536256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b25f900 len:0x10000 key:0x183e00 00:24:15.998 [2024-11-20 16:14:46.536271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.998 [2024-11-20 16:14:46.536289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2dfd00 len:0x10000 key:0x183e00 00:24:15.998 [2024-11-20 16:14:46.536306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 
00:24:15.998 [2024-11-20 16:14:46.536323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000267440 len:0x10000 key:0x183f00 00:24:15.998 [2024-11-20 16:14:46.536337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.998 [2024-11-20 16:14:46.536355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000071eff80 len:0x10000 key:0x183900 00:24:15.998 [2024-11-20 16:14:46.536368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.998 [2024-11-20 16:14:46.536385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b27fa00 len:0x10000 key:0x183e00 00:24:15.998 [2024-11-20 16:14:46.536398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.536417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b20f680 len:0x10000 key:0x183e00 00:24:15.999 [2024-11-20 16:14:46.536430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.536447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000712f980 len:0x10000 key:0x183900 00:24:15.999 [2024-11-20 16:14:46.536460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.536478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2afb80 len:0x10000 key:0x183e00 00:24:15.999 [2024-11-20 16:14:46.536491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.536509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000247340 len:0x10000 key:0x183f00 00:24:15.999 [2024-11-20 16:14:46.536528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.536546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000707f400 len:0x10000 key:0x183900 00:24:15.999 [2024-11-20 16:14:46.536559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.536577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070af580 len:0x10000 key:0x183900 00:24:15.999 [2024-11-20 16:14:46.536590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 
00:24:15.999 [2024-11-20 16:14:46.536608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b26f980 len:0x10000 key:0x183e00 00:24:15.999 [2024-11-20 16:14:46.536622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.536639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000701f100 len:0x10000 key:0x183900 00:24:15.999 [2024-11-20 16:14:46.536655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.536672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b21f700 len:0x10000 key:0x183e00 00:24:15.999 [2024-11-20 16:14:46.536686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.536704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e2eac0 len:0x10000 key:0x184300 00:24:15.999 [2024-11-20 16:14:46.536717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.536735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000009ef380 len:0x10000 key:0x183b00 00:24:15.999 [2024-11-20 16:14:46.536748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.536766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000070bf600 len:0x10000 key:0x183900 00:24:15.999 [2024-11-20 16:14:46.536778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.536795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2cfc80 len:0x10000 key:0x183e00 00:24:15.999 [2024-11-20 16:14:46.536809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.536826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000002573c0 len:0x10000 key:0x183f00 00:24:15.999 [2024-11-20 16:14:46.536839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.536856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200003e1ea40 len:0x10000 key:0x184300 00:24:15.999 [2024-11-20 16:14:46.536870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 
00:24:15.999 [2024-11-20 16:14:46.536887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200000227240 len:0x10000 key:0x183f00 00:24:15.999 [2024-11-20 16:14:46.536900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.536917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000714fa80 len:0x10000 key:0x183900 00:24:15.999 [2024-11-20 16:14:46.536930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.536948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000700f080 len:0x10000 key:0x183900 00:24:15.999 [2024-11-20 16:14:46.536961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.536978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000716fb80 len:0x10000 key:0x183900 00:24:15.999 [2024-11-20 16:14:46.536994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.537011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b2efd80 len:0x10000 key:0x183e00 00:24:15.999 [2024-11-20 16:14:46.537025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.537042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88832 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000709f500 len:0x10000 key:0x183900 00:24:15.999 [2024-11-20 16:14:46.537055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.537073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000717fc00 len:0x10000 key:0x183900 00:24:15.999 [2024-11-20 16:14:46.537086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.537103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f82000 len:0x10000 key:0x183e00 00:24:15.999 [2024-11-20 16:14:46.537116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:15.999 [2024-11-20 16:14:46.537136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011fa3000 len:0x10000 key:0x183e00 00:24:15.999 [2024-11-20 16:14:46.537149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 
00:24:16.000 [2024-11-20 16:14:46.537168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011fc4000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011fe5000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012006000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012027000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012048000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012069000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ccd000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c987000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e55f000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 
00:24:16.000 [2024-11-20 16:14:46.537457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001086f000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001084e000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124ec000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124cb000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011f61000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbdd000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bbbc000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb9b000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bb7a000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 
00:24:16.000 [2024-11-20 16:14:46.537745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f096000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f0b7000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f0d8000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011ebc000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e9b000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.000 [2024-11-20 16:14:46.537900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011e7a000 len:0x10000 key:0x183e00 00:24:16.000 [2024-11-20 16:14:46.537913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.537931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bce5000 len:0x10000 key:0x183e00 00:24:16.001 [2024-11-20 16:14:46.537944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.537962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bcc4000 len:0x10000 key:0x183e00 00:24:16.001 [2024-11-20 16:14:46.537975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.537993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bca3000 len:0x10000 key:0x183e00 00:24:16.001 [2024-11-20 16:14:46.538006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 
00:24:16.001 [2024-11-20 16:14:46.538024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bc82000 len:0x10000 key:0x183e00 00:24:16.001 [2024-11-20 16:14:46.538036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:849a p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.542646] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019283040 was disconnected and freed. reset controller. 00:24:16.001 [2024-11-20 16:14:46.542676] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:16.001 [2024-11-20 16:14:46.542699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000059fb80 len:0x10000 key:0x183a00 00:24:16.001 [2024-11-20 16:14:46.542714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.542739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000042f000 len:0x10000 key:0x183a00 00:24:16.001 [2024-11-20 16:14:46.542753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.542771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000045f180 len:0x10000 key:0x183a00 00:24:16.001 [2024-11-20 16:14:46.542785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.542803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000051f780 len:0x10000 key:0x183a00 00:24:16.001 [2024-11-20 16:14:46.542816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.542833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001952fa00 len:0x10000 key:0x182a00 00:24:16.001 [2024-11-20 16:14:46.542847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.542864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000080ea80 len:0x10000 key:0x183b00 00:24:16.001 [2024-11-20 16:14:46.542878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.542895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000008bf000 len:0x10000 key:0x183b00 00:24:16.001 [2024-11-20 16:14:46.542909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.542927] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194df780 len:0x10000 key:0x182a00 00:24:16.001 [2024-11-20 16:14:46.542940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.542957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195cff00 len:0x10000 key:0x182a00 00:24:16.001 [2024-11-20 16:14:46.542971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.542988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000082eb80 len:0x10000 key:0x183b00 00:24:16.001 [2024-11-20 16:14:46.543001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.543019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194bf680 len:0x10000 key:0x182a00 00:24:16.001 [2024-11-20 16:14:46.543036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.543054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001950f900 len:0x10000 key:0x182a00 00:24:16.001 [2024-11-20 16:14:46.543067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.543084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000055f980 len:0x10000 key:0x183a00 00:24:16.001 [2024-11-20 16:14:46.543098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.543116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000052f800 len:0x10000 key:0x183a00 00:24:16.001 [2024-11-20 16:14:46.543128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.543146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004af400 len:0x10000 key:0x183a00 00:24:16.001 [2024-11-20 16:14:46.543159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.543177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000048f300 len:0x10000 key:0x183a00 00:24:16.001 [2024-11-20 16:14:46.543189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.543207] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001957fc80 len:0x10000 key:0x182a00 00:24:16.001 [2024-11-20 16:14:46.543220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.543237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000087ee00 len:0x10000 key:0x183b00 00:24:16.001 [2024-11-20 16:14:46.543250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.543268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004bf480 len:0x10000 key:0x183a00 00:24:16.001 [2024-11-20 16:14:46.543281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.543299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005efe00 len:0x10000 key:0x183a00 00:24:16.001 [2024-11-20 16:14:46.543312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.001 [2024-11-20 16:14:46.543329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000088ee80 len:0x10000 key:0x183b00 00:24:16.001 [2024-11-20 16:14:46.543343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001958fd00 len:0x10000 key:0x182a00 00:24:16.002 [2024-11-20 16:14:46.543374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001953fa80 len:0x10000 key:0x182a00 00:24:16.002 [2024-11-20 16:14:46.543406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005afc00 len:0x10000 key:0x183a00 00:24:16.002 [2024-11-20 16:14:46.543437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000040ef00 len:0x10000 key:0x183a00 00:24:16.002 [2024-11-20 16:14:46.543468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000057fa80 len:0x10000 key:0x183a00 00:24:16.002 [2024-11-20 16:14:46.543498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000194ff880 len:0x10000 key:0x182a00 00:24:16.002 [2024-11-20 16:14:46.543556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000054f900 len:0x10000 key:0x183a00 00:24:16.002 [2024-11-20 16:14:46.543587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000004ef600 len:0x10000 key:0x183a00 00:24:16.002 [2024-11-20 16:14:46.543618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001955fb80 len:0x10000 key:0x182a00 00:24:16.002 [2024-11-20 16:14:46.543648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001954fb00 len:0x10000 key:0x182a00 00:24:16.002 [2024-11-20 16:14:46.543681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000083ec00 len:0x10000 key:0x183b00 00:24:16.002 [2024-11-20 16:14:46.543712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005cfd00 len:0x10000 key:0x183a00 00:24:16.002 [2024-11-20 16:14:46.543744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000195bfe80 len:0x10000 key:0x182a00 00:24:16.002 [2024-11-20 16:14:46.543775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543795] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000005dfd80 len:0x10000 key:0x183a00 00:24:16.002 [2024-11-20 16:14:46.543808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000053f880 len:0x10000 key:0x183a00 00:24:16.002 [2024-11-20 16:14:46.543838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e95e000 len:0x10000 key:0x183e00 00:24:16.002 [2024-11-20 16:14:46.543869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e97f000 len:0x10000 key:0x183e00 00:24:16.002 [2024-11-20 16:14:46.543900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131af000 len:0x10000 key:0x183e00 00:24:16.002 [2024-11-20 16:14:46.543932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001318e000 len:0x10000 key:0x183e00 00:24:16.002 [2024-11-20 16:14:46.543963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.543981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001316d000 len:0x10000 key:0x183e00 00:24:16.002 [2024-11-20 16:14:46.543994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.544012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001314c000 len:0x10000 key:0x183e00 00:24:16.002 [2024-11-20 16:14:46.544025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.544043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001312b000 len:0x10000 key:0x183e00 00:24:16.002 [2024-11-20 16:14:46.544056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.544073] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001310a000 len:0x10000 key:0x183e00 00:24:16.002 [2024-11-20 16:14:46.544088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.002 [2024-11-20 16:14:46.544106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013275000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013254000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013233000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013212000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131f1000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000131d0000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c24f000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c22e000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010536000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010557000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010578000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e13f000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ef000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000118ce000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdab000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd8a000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd69000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd48000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd27000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.544709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bd06000 len:0x10000 key:0x183e00 00:24:16.003 [2024-11-20 16:14:46.544722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:6b78 p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.547929] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257880 was disconnected and freed. reset controller. 00:24:16.003 [2024-11-20 16:14:46.547956] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:16.003 [2024-11-20 16:14:46.547977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196cfd00 len:0x10000 key:0x182b00 00:24:16.003 [2024-11-20 16:14:46.547991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.548018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001966fa00 len:0x10000 key:0x182b00 00:24:16.003 [2024-11-20 16:14:46.548032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.548050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019b8fd00 len:0x10000 key:0x182d00 00:24:16.003 [2024-11-20 16:14:46.548063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.548081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001942f200 len:0x10000 key:0x182a00 00:24:16.003 [2024-11-20 16:14:46.548095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.003 [2024-11-20 16:14:46.548113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bf0000 len:0x10000 key:0x182d00 00:24:16.003 [2024-11-20 16:14:46.548126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85376 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ef800 len:0x10000 key:0x182c00 00:24:16.004 [2024-11-20 16:14:46.548157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001944f300 len:0x10000 key:0x182a00 00:24:16.004 [2024-11-20 16:14:46.548188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001962f800 len:0x10000 key:0x182b00 00:24:16.004 [2024-11-20 16:14:46.548219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196afc00 len:0x10000 key:0x182b00 00:24:16.004 [2024-11-20 16:14:46.548250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001941f180 len:0x10000 key:0x182a00 00:24:16.004 [2024-11-20 16:14:46.548281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001999fd80 len:0x10000 key:0x182c00 00:24:16.004 [2024-11-20 16:14:46.548310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001964f900 len:0x10000 key:0x182b00 00:24:16.004 [2024-11-20 16:14:46.548342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001948f500 len:0x10000 key:0x182a00 00:24:16.004 [2024-11-20 16:14:46.548376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198df780 len:0x10000 key:0x182c00 00:24:16.004 [2024-11-20 16:14:46.548407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86528 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001967fa80 len:0x10000 key:0x182b00 00:24:16.004 [2024-11-20 16:14:46.548437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199f0000 len:0x10000 key:0x182c00 00:24:16.004 [2024-11-20 16:14:46.548467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001982f200 len:0x10000 key:0x182c00 00:24:16.004 [2024-11-20 16:14:46.548498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001985f380 len:0x10000 key:0x182c00 00:24:16.004 [2024-11-20 16:14:46.548538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001963f880 len:0x10000 key:0x182b00 00:24:16.004 [2024-11-20 16:14:46.548569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bbfe80 len:0x10000 key:0x182d00 00:24:16.004 [2024-11-20 16:14:46.548600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001949f580 len:0x10000 key:0x182a00 00:24:16.004 [2024-11-20 16:14:46.548631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196efe00 len:0x10000 key:0x182b00 00:24:16.004 [2024-11-20 16:14:46.548662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019b9fd80 len:0x10000 key:0x182d00 00:24:16.004 [2024-11-20 16:14:46.548693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.004 [2024-11-20 16:14:46.548711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:87680 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001986f400 len:0x10000 key:0x182c00 00:24:16.005 [2024-11-20 16:14:46.548727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.548744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001969fb80 len:0x10000 key:0x182b00 00:24:16.005 [2024-11-20 16:14:46.548757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.548775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001940f100 len:0x10000 key:0x182a00 00:24:16.005 [2024-11-20 16:14:46.548788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.548806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196dfd80 len:0x10000 key:0x182b00 00:24:16.005 [2024-11-20 16:14:46.548819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.548838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000199cff00 len:0x10000 key:0x182c00 00:24:16.005 [2024-11-20 16:14:46.548850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.548868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000198ff880 len:0x10000 key:0x182c00 00:24:16.005 [2024-11-20 16:14:46.548881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.548898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019bafe00 len:0x10000 key:0x182d00 00:24:16.005 [2024-11-20 16:14:46.548912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.548929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:88576 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001991f980 len:0x10000 key:0x182c00 00:24:16.005 [2024-11-20 16:14:46.548943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.548960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:88704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000196bfc80 len:0x10000 key:0x182b00 00:24:16.005 [2024-11-20 16:14:46.548974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.548992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88832 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001984f300 len:0x10000 key:0x182c00 00:24:16.005 [2024-11-20 16:14:46.549005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:88960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001992fa00 len:0x10000 key:0x182c00 00:24:16.005 [2024-11-20 16:14:46.549036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123a2000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123c3000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000123e4000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012405000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012426000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012447000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012468000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80000 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x200012489000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000120ed000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cda7000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ed9f000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c8f000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010c6e000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000126fc000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000126db000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012381000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:82176 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x200011952000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c81c000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c7fb000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfdc000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bfbb000 len:0x10000 key:0x183e00 00:24:16.005 [2024-11-20 16:14:46.549715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.005 [2024-11-20 16:14:46.549733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf9a000 len:0x10000 key:0x183e00 00:24:16.006 [2024-11-20 16:14:46.549746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.549765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf79000 len:0x10000 key:0x183e00 00:24:16.006 [2024-11-20 16:14:46.549780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.549798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf58000 len:0x10000 key:0x183e00 00:24:16.006 [2024-11-20 16:14:46.549810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.549829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bf37000 len:0x10000 key:0x183e00 00:24:16.006 [2024-11-20 16:14:46.549842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.549861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x20000bf16000 len:0x10000 key:0x183e00 00:24:16.006 [2024-11-20 16:14:46.549874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.549893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8d6000 len:0x10000 key:0x183e00 00:24:16.006 [2024-11-20 16:14:46.549906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.549924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f8f7000 len:0x10000 key:0x183e00 00:24:16.006 [2024-11-20 16:14:46.549937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.549956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f918000 len:0x10000 key:0x183e00 00:24:16.006 [2024-11-20 16:14:46.549971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.549989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000122dc000 len:0x10000 key:0x183e00 00:24:16.006 [2024-11-20 16:14:46.550002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:4e5a p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.552841] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257640 was disconnected and freed. reset controller. 00:24:16.006 [2024-11-20 16:14:46.552860] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:16.006 [2024-11-20 16:14:46.552877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d4fb00 len:0x10000 key:0x182e00 00:24:16.006 [2024-11-20 16:14:46.552888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.552907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fcff00 len:0x10000 key:0x182f00 00:24:16.006 [2024-11-20 16:14:46.552918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.552933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c0f100 len:0x10000 key:0x182e00 00:24:16.006 [2024-11-20 16:14:46.552945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.552963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ccf700 len:0x10000 key:0x182e00 00:24:16.006 [2024-11-20 16:14:46.552974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.552989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019edf780 len:0x10000 key:0x182f00 00:24:16.006 [2024-11-20 16:14:46.553000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dafe00 len:0x10000 key:0x182e00 00:24:16.006 [2024-11-20 16:14:46.553042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a6fa00 len:0x10000 key:0x182d00 00:24:16.006 [2024-11-20 16:14:46.553068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e8f500 len:0x10000 key:0x182f00 00:24:16.006 [2024-11-20 16:14:46.553093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f7fc80 len:0x10000 key:0x182f00 00:24:16.006 [2024-11-20 16:14:46.553119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 
00:24:16.006 [2024-11-20 16:14:46.553134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019dcff00 len:0x10000 key:0x182e00 00:24:16.006 [2024-11-20 16:14:46.553145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e6f400 len:0x10000 key:0x182f00 00:24:16.006 [2024-11-20 16:14:46.553171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ebf680 len:0x10000 key:0x182f00 00:24:16.006 [2024-11-20 16:14:46.553196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d0f900 len:0x10000 key:0x182e00 00:24:16.006 [2024-11-20 16:14:46.553222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cdf780 len:0x10000 key:0x182e00 00:24:16.006 [2024-11-20 16:14:46.553247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c5f380 len:0x10000 key:0x182e00 00:24:16.006 [2024-11-20 16:14:46.553275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c3f280 len:0x10000 key:0x182e00 00:24:16.006 [2024-11-20 16:14:46.553300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f2fa00 len:0x10000 key:0x182f00 00:24:16.006 [2024-11-20 16:14:46.553327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a2f800 len:0x10000 key:0x182d00 00:24:16.006 [2024-11-20 16:14:46.553352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 
00:24:16.006 [2024-11-20 16:14:46.553367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c6f400 len:0x10000 key:0x182e00 00:24:16.006 [2024-11-20 16:14:46.553378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d9fd80 len:0x10000 key:0x182e00 00:24:16.006 [2024-11-20 16:14:46.553403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019a3f880 len:0x10000 key:0x182d00 00:24:16.006 [2024-11-20 16:14:46.553428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f3fa80 len:0x10000 key:0x182f00 00:24:16.006 [2024-11-20 16:14:46.553454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eef800 len:0x10000 key:0x182f00 00:24:16.006 [2024-11-20 16:14:46.553479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d5fb80 len:0x10000 key:0x182e00 00:24:16.006 [2024-11-20 16:14:46.553505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019fafe00 len:0x10000 key:0x182f00 00:24:16.006 [2024-11-20 16:14:46.553537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.006 [2024-11-20 16:14:46.553552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d2fa00 len:0x10000 key:0x182e00 00:24:16.006 [2024-11-20 16:14:46.553564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.553580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eaf600 len:0x10000 key:0x182f00 00:24:16.007 [2024-11-20 16:14:46.553591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 
00:24:16.007 [2024-11-20 16:14:46.553605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cff880 len:0x10000 key:0x182e00 00:24:16.007 [2024-11-20 16:14:46.553617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.553631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019c9f580 len:0x10000 key:0x182e00 00:24:16.007 [2024-11-20 16:14:46.553642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.553657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f0f900 len:0x10000 key:0x182f00 00:24:16.007 [2024-11-20 16:14:46.553668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.553682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019eff880 len:0x10000 key:0x182f00 00:24:16.007 [2024-11-20 16:14:46.553694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.553708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019ddff80 len:0x10000 key:0x182e00 00:24:16.007 [2024-11-20 16:14:46.553719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.553734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d7fc80 len:0x10000 key:0x182e00 00:24:16.007 [2024-11-20 16:14:46.553745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.553760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019f6fc00 len:0x10000 key:0x182f00 00:24:16.007 [2024-11-20 16:14:46.553771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.553786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019d8fd00 len:0x10000 key:0x182e00 00:24:16.007 [2024-11-20 16:14:46.553797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.553812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019cef800 len:0x10000 key:0x182e00 00:24:16.007 [2024-11-20 16:14:46.553822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 
00:24:16.007 [2024-11-20 16:14:46.553837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f19e000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.553850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.553865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f1bf000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.553877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.553892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001333b000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.553903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.553918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001331a000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.553930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.553945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013485000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.553956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.553971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013464000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.553982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013443000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013422000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013401000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 
00:24:16.007 [2024-11-20 16:14:46.554078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000133e0000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c45f000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c43e000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c41d000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3fc000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3db000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c3ba000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e457000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e436000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 
00:24:16.007 [2024-11-20 16:14:46.554315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e415000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010935000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010914000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c20d000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1ec000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1cb000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c1aa000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.007 [2024-11-20 16:14:46.554503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c189000 len:0x10000 key:0x183e00 00:24:16.007 [2024-11-20 16:14:46.554514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.554535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c168000 len:0x10000 key:0x183e00 00:24:16.008 [2024-11-20 16:14:46.554546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 
00:24:16.008 [2024-11-20 16:14:46.554561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c147000 len:0x10000 key:0x183e00 00:24:16.008 [2024-11-20 16:14:46.554572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:cafe p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557213] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019257400 was disconnected and freed. reset controller. 00:24:16.008 [2024-11-20 16:14:46.557230] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:16.008 [2024-11-20 16:14:46.557247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e2f200 len:0x10000 key:0x182f00 00:24:16.008 [2024-11-20 16:14:46.557258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2af600 len:0x10000 key:0x183300 00:24:16.008 [2024-11-20 16:14:46.557295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a2df780 len:0x10000 key:0x183300 00:24:16.008 [2024-11-20 16:14:46.557321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a39fd80 len:0x10000 key:0x183300 00:24:16.008 [2024-11-20 16:14:46.557355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5afe00 len:0x10000 key:0x183100 00:24:16.008 [2024-11-20 16:14:46.557380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a02f800 len:0x10000 key:0x183000 00:24:16.008 [2024-11-20 16:14:46.557409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0dfd80 len:0x10000 key:0x183000 00:24:16.008 [2024-11-20 16:14:46.557435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a55fb80 len:0x10000 key:0x183100 00:24:16.008 [2024-11-20 16:14:46.557461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a25f380 len:0x10000 key:0x183300 00:24:16.008 [2024-11-20 16:14:46.557487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a04f900 len:0x10000 key:0x183000 00:24:16.008 [2024-11-20 16:14:46.557514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a53fa80 len:0x10000 key:0x183100 00:24:16.008 [2024-11-20 16:14:46.557548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a58fd00 len:0x10000 key:0x183100 00:24:16.008 [2024-11-20 16:14:46.557574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3dff80 len:0x10000 key:0x183300 00:24:16.008 [2024-11-20 16:14:46.557600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3afe00 len:0x10000 key:0x183300 00:24:16.008 [2024-11-20 16:14:46.557626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a32fa00 len:0x10000 key:0x183300 00:24:16.008 [2024-11-20 16:14:46.557652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a30f900 len:0x10000 key:0x183300 00:24:16.008 [2024-11-20 16:14:46.557678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a20f100 len:0x10000 key:0x183300 00:24:16.008 [2024-11-20 16:14:46.557706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a09fb80 len:0x10000 key:0x183000 00:24:16.008 [2024-11-20 16:14:46.557732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a33fa80 len:0x10000 key:0x183300 00:24:16.008 [2024-11-20 16:14:46.557758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a01f780 len:0x10000 key:0x183000 00:24:16.008 [2024-11-20 16:14:46.557784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a0afc00 len:0x10000 key:0x183000 00:24:16.008 [2024-11-20 16:14:46.557810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a21f180 len:0x10000 key:0x183300 00:24:16.008 [2024-11-20 16:14:46.557836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5bfe80 len:0x10000 key:0x183100 00:24:16.008 [2024-11-20 16:14:46.557862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e3f280 len:0x10000 key:0x182f00 00:24:16.008 [2024-11-20 16:14:46.557888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a28f500 len:0x10000 key:0x183300 00:24:16.008 [2024-11-20 16:14:46.557914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e0f100 len:0x10000 key:0x182f00 00:24:16.008 [2024-11-20 16:14:46.557940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a57fc80 len:0x10000 key:0x183100 00:24:16.008 [2024-11-20 16:14:46.557966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.557981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3cff00 len:0x10000 key:0x183300 00:24:16.008 [2024-11-20 16:14:46.557993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.558008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a36fc00 len:0x10000 key:0x183300 00:24:16.008 [2024-11-20 16:14:46.558020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.558034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5dff80 len:0x10000 key:0x183100 00:24:16.008 [2024-11-20 16:14:46.558045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.558060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a5cff00 len:0x10000 key:0x183100 00:24:16.008 [2024-11-20 16:14:46.558072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.558086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a05f980 len:0x10000 key:0x183000 00:24:16.008 [2024-11-20 16:14:46.558098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.008 [2024-11-20 16:14:46.558112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200019e5f380 len:0x10000 key:0x182f00 00:24:16.009 [2024-11-20 16:14:46.558124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a24f300 len:0x10000 key:0x183300 00:24:16.009 [2024-11-20 16:14:46.558150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558164] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a00f700 len:0x10000 key:0x183000 00:24:16.009 [2024-11-20 16:14:46.558175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a3bfe80 len:0x10000 key:0x183300 00:24:16.009 [2024-11-20 16:14:46.558202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5be000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f5df000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135cf000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135ae000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001358d000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001356c000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001354b000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558404] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001352a000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013695000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013674000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013653000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013632000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200013611000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000135f0000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c66f000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c64e000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558649] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec97000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec76000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ec55000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f65000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010f44000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c62d000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c60c000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5eb000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5ca000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558896] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c5a9000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.009 [2024-11-20 16:14:46.558923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c588000 len:0x10000 key:0x183e00 00:24:16.009 [2024-11-20 16:14:46.558935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.558950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c567000 len:0x10000 key:0x183e00 00:24:16.010 [2024-11-20 16:14:46.558962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1af8 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.561733] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192571c0 was disconnected and freed. reset controller. 00:24:16.010 [2024-11-20 16:14:46.561750] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:16.010 [2024-11-20 16:14:46.561767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6ff880 len:0x10000 key:0x183500 00:24:16.010 [2024-11-20 16:14:46.561779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.561797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a97fc80 len:0x10000 key:0x183400 00:24:16.010 [2024-11-20 16:14:46.561809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.561825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9afe00 len:0x10000 key:0x183400 00:24:16.010 [2024-11-20 16:14:46.561836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.561852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a67f480 len:0x10000 key:0x183500 00:24:16.010 [2024-11-20 16:14:46.561864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.561879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a88f500 len:0x10000 key:0x183400 00:24:16.010 [2024-11-20 16:14:46.561891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.561907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a75fb80 len:0x10000 key:0x183500 00:24:16.010 [2024-11-20 16:14:46.561919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.561934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a41f780 len:0x10000 key:0x183100 00:24:16.010 [2024-11-20 16:14:46.561945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.561963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a83f280 len:0x10000 key:0x183400 00:24:16.010 [2024-11-20 16:14:46.561975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.561991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a92fa00 len:0x10000 key:0x183400 00:24:16.010 [2024-11-20 16:14:46.562003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a77fc80 len:0x10000 key:0x183500 00:24:16.010 [2024-11-20 16:14:46.562030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a81f180 len:0x10000 key:0x183400 00:24:16.010 [2024-11-20 16:14:46.562056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a86f400 len:0x10000 key:0x183400 00:24:16.010 [2024-11-20 16:14:46.562083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6bf680 len:0x10000 key:0x183500 00:24:16.010 [2024-11-20 16:14:46.562111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a68f500 len:0x10000 key:0x183500 00:24:16.010 [2024-11-20 16:14:46.562138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a60f100 len:0x10000 key:0x183500 00:24:16.010 [2024-11-20 16:14:46.562164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a9dff80 len:0x10000 key:0x183400 00:24:16.010 [2024-11-20 16:14:46.562191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8df780 len:0x10000 key:0x183400 00:24:16.010 [2024-11-20 16:14:46.562218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7cff00 len:0x10000 key:0x183500 00:24:16.010 [2024-11-20 16:14:46.562245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a61f180 len:0x10000 key:0x183500 00:24:16.010 [2024-11-20 16:14:46.562275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a74fb00 len:0x10000 key:0x183500 00:24:16.010 [2024-11-20 16:14:46.562301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a7dff80 len:0x10000 key:0x183500 00:24:16.010 [2024-11-20 16:14:46.562329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8ef800 len:0x10000 key:0x183400 00:24:16.010 [2024-11-20 16:14:46.562357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a89f580 len:0x10000 key:0x183400 00:24:16.010 [2024-11-20 16:14:46.562384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a70f900 len:0x10000 key:0x183500 00:24:16.010 [2024-11-20 16:14:46.562411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a95fb80 len:0x10000 key:0x183400 00:24:16.010 [2024-11-20 16:14:46.562438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6df780 len:0x10000 key:0x183500 00:24:16.010 [2024-11-20 16:14:46.562465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a85f380 len:0x10000 key:0x183400 00:24:16.010 [2024-11-20 16:14:46.562492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a6af600 len:0x10000 key:0x183500 00:24:16.010 [2024-11-20 16:14:46.562525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a64f300 len:0x10000 key:0x183500 00:24:16.010 [2024-11-20 16:14:46.562552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8bf680 len:0x10000 key:0x183400 00:24:16.010 [2024-11-20 16:14:46.562581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a8af600 len:0x10000 key:0x183400 00:24:16.010 [2024-11-20 16:14:46.562609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a78fd00 len:0x10000 key:0x183500 00:24:16.010 [2024-11-20 16:14:46.562636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a72fa00 len:0x10000 key:0x183500 00:24:16.010 [2024-11-20 16:14:46.562664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.010 [2024-11-20 16:14:46.562679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a91f980 len:0x10000 key:0x183400 00:24:16.010 [2024-11-20 16:14:46.562691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.562706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a73fa80 len:0x10000 key:0x183500 00:24:16.011 [2024-11-20 16:14:46.562717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.562732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001a69f580 len:0x10000 key:0x183500 00:24:16.011 [2024-11-20 16:14:46.562744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.562759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9de000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.562771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.562787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000f9ff000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.562799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.562814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001375b000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.562826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.562842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001373a000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.562854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.562870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df71000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.562883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.562899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e0bb000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.562911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.562927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e09a000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.562938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.562955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e079000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.562967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.562983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e058000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.562995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e037000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e016000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dff5000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfd4000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000dfb3000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20000df92000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b421000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001208a000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bef5000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bed4000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000beb3000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be92000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be71000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be50000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20000d2cf000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d2ae000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d28d000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d26c000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.563511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200010599000 len:0x10000 key:0x183e00 00:24:16.011 [2024-11-20 16:14:46.563529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:1bf6 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.566498] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256f80 was disconnected and freed. reset controller. 00:24:16.011 [2024-11-20 16:14:46.566523] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:16.011 [2024-11-20 16:14:46.566540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adcff00 len:0x10000 key:0x183700 00:24:16.011 [2024-11-20 16:14:46.566552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.566571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac5f380 len:0x10000 key:0x183700 00:24:16.011 [2024-11-20 16:14:46.566583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.566599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac8f500 len:0x10000 key:0x183700 00:24:16.011 [2024-11-20 16:14:46.566611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.566627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad4fb00 len:0x10000 key:0x183700 00:24:16.011 [2024-11-20 16:14:46.566639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.011 [2024-11-20 16:14:46.566654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af5fb80 len:0x10000 key:0x184400 00:24:16.012 [2024-11-20 16:14:46.566666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.566682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa3f880 len:0x10000 key:0x184200 00:24:16.012 [2024-11-20 16:14:46.566694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.566709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaefe00 len:0x10000 key:0x184200 00:24:16.012 [2024-11-20 16:14:46.566721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.566737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af0f900 len:0x10000 key:0x184400 00:24:16.012 [2024-11-20 16:14:46.566749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.566764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac0f100 len:0x10000 key:0x183700 00:24:16.012 [2024-11-20 16:14:46.566776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 
00:24:16.012 [2024-11-20 16:14:46.566795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa5f980 len:0x10000 key:0x184200 00:24:16.012 [2024-11-20 16:14:46.566807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.566822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aeef800 len:0x10000 key:0x184400 00:24:16.012 [2024-11-20 16:14:46.566834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.566849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af3fa80 len:0x10000 key:0x184400 00:24:16.012 [2024-11-20 16:14:46.566861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.566876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad8fd00 len:0x10000 key:0x183700 00:24:16.012 [2024-11-20 16:14:46.566888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.566903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad5fb80 len:0x10000 key:0x183700 00:24:16.012 [2024-11-20 16:14:46.566915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.566930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acdf780 len:0x10000 key:0x183700 00:24:16.012 [2024-11-20 16:14:46.566942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.566957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acbf680 len:0x10000 key:0x183700 00:24:16.012 [2024-11-20 16:14:46.566969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.566985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afafe00 len:0x10000 key:0x184400 00:24:16.012 [2024-11-20 16:14:46.566997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aaafc00 len:0x10000 key:0x184200 00:24:16.012 [2024-11-20 16:14:46.567026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 
00:24:16.012 [2024-11-20 16:14:46.567041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001acef800 len:0x10000 key:0x183700 00:24:16.012 [2024-11-20 16:14:46.567053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa2f800 len:0x10000 key:0x184200 00:24:16.012 [2024-11-20 16:14:46.567080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aabfc80 len:0x10000 key:0x184200 00:24:16.012 [2024-11-20 16:14:46.567109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001afbfe80 len:0x10000 key:0x184400 00:24:16.012 [2024-11-20 16:14:46.567136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af6fc00 len:0x10000 key:0x184400 00:24:16.012 [2024-11-20 16:14:46.567163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001addff80 len:0x10000 key:0x183700 00:24:16.012 [2024-11-20 16:14:46.567190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ac3f280 len:0x10000 key:0x183700 00:24:16.012 [2024-11-20 16:14:46.567217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001adafe00 len:0x10000 key:0x183700 00:24:16.012 [2024-11-20 16:14:46.567244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af2fa00 len:0x10000 key:0x184400 00:24:16.012 [2024-11-20 16:14:46.567271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 
00:24:16.012 [2024-11-20 16:14:46.567287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad7fc80 len:0x10000 key:0x183700 00:24:16.012 [2024-11-20 16:14:46.567299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad1f980 len:0x10000 key:0x183700 00:24:16.012 [2024-11-20 16:14:46.567327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af8fd00 len:0x10000 key:0x184400 00:24:16.012 [2024-11-20 16:14:46.567354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001af7fc80 len:0x10000 key:0x184400 00:24:16.012 [2024-11-20 16:14:46.567381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa6fa00 len:0x10000 key:0x184200 00:24:16.012 [2024-11-20 16:14:46.567410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa0f700 len:0x10000 key:0x184200 00:24:16.012 [2024-11-20 16:14:46.567437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aff0000 len:0x10000 key:0x184400 00:24:16.012 [2024-11-20 16:14:46.567464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aa1f780 len:0x10000 key:0x184200 00:24:16.012 [2024-11-20 16:14:46.567491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ad6fc00 len:0x10000 key:0x183700 00:24:16.012 [2024-11-20 16:14:46.567524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 
00:24:16.012 [2024-11-20 16:14:46.567540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fdfe000 len:0x10000 key:0x183e00 00:24:16.012 [2024-11-20 16:14:46.567552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000fe1f000 len:0x10000 key:0x183e00 00:24:16.012 [2024-11-20 16:14:46.567580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b463000 len:0x10000 key:0x183e00 00:24:16.012 [2024-11-20 16:14:46.567608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b442000 len:0x10000 key:0x183e00 00:24:16.012 [2024-11-20 16:14:46.567635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.012 [2024-11-20 16:14:46.567651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b739000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.567663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.567679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b718000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.567690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.567706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6f7000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.567720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.567735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6d6000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.567746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.567762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b6b5000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.567774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 
00:24:16.013 [2024-11-20 16:14:46.567789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b694000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.567801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.567817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b673000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.567828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.567845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b652000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.567857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.567873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b631000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.567885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.567901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b610000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.567912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.567929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca8f000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.567941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.567957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ca6e000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.567968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.567984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d93000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.567996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.568012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d72000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.568025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 
00:24:16.013 [2024-11-20 16:14:46.568041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011d51000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.568053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.568069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c105000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.568081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.568097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c0e4000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.568109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.568125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c0c3000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.568137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.568152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c0a2000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.568164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.568188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c081000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.568199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.568216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c060000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.568228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.568244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4df000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.568256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.568272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d4be000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.568283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 
00:24:16.013 [2024-11-20 16:14:46.568299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d49d000 len:0x10000 key:0x183e00 00:24:16.013 [2024-11-20 16:14:46.568311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:be12 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.570835] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256d40 was disconnected and freed. reset controller. 00:24:16.013 [2024-11-20 16:14:46.570857] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:16.013 [2024-11-20 16:14:46.570873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:83968 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aeaf600 len:0x10000 key:0x184400 00:24:16.013 [2024-11-20 16:14:46.570885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.570909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:84096 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b32fa00 len:0x10000 key:0x183800 00:24:16.013 [2024-11-20 16:14:46.570929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.570945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:84224 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b35fb80 len:0x10000 key:0x183800 00:24:16.013 [2024-11-20 16:14:46.570956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.570971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae2f200 len:0x10000 key:0x184400 00:24:16.013 [2024-11-20 16:14:46.570993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.013 [2024-11-20 16:14:46.571012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:84480 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b23f280 len:0x10000 key:0x183800 00:24:16.013 [2024-11-20 16:14:46.571025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:84608 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b02f800 len:0x10000 key:0x183600 00:24:16.014 [2024-11-20 16:14:46.571055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84736 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0dfd80 len:0x10000 key:0x183600 00:24:16.014 [2024-11-20 16:14:46.571085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571102] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5dff80 len:0x10000 key:0x183d00 00:24:16.014 [2024-11-20 16:14:46.571116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84992 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2df780 len:0x10000 key:0x183800 00:24:16.014 [2024-11-20 16:14:46.571146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85120 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b04f900 len:0x10000 key:0x183600 00:24:16.014 [2024-11-20 16:14:46.571176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:85248 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b5bfe80 len:0x10000 key:0x183d00 00:24:16.014 [2024-11-20 16:14:46.571206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:85376 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b21f180 len:0x10000 key:0x183800 00:24:16.014 [2024-11-20 16:14:46.571239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85504 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae6f400 len:0x10000 key:0x184400 00:24:16.014 [2024-11-20 16:14:46.571269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85632 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae3f280 len:0x10000 key:0x184400 00:24:16.014 [2024-11-20 16:14:46.571299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85760 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3afe00 len:0x10000 key:0x183800 00:24:16.014 [2024-11-20 16:14:46.571330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85888 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b38fd00 len:0x10000 key:0x183800 00:24:16.014 [2024-11-20 16:14:46.571360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:86016 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b28f500 len:0x10000 key:0x183800 00:24:16.014 [2024-11-20 16:14:46.571391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:86144 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b09fb80 len:0x10000 key:0x183600 00:24:16.014 [2024-11-20 16:14:46.571420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:86272 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3bfe80 len:0x10000 key:0x183800 00:24:16.014 [2024-11-20 16:14:46.571451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86400 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b01f780 len:0x10000 key:0x183600 00:24:16.014 [2024-11-20 16:14:46.571481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:86528 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b0afc00 len:0x10000 key:0x183600 00:24:16.014 [2024-11-20 16:14:46.571511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86656 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b29f580 len:0x10000 key:0x183800 00:24:16.014 [2024-11-20 16:14:46.571547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86784 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b24f300 len:0x10000 key:0x183800 00:24:16.014 [2024-11-20 16:14:46.571584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:86912 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aebf680 len:0x10000 key:0x184400 00:24:16.014 [2024-11-20 16:14:46.571614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b30f900 len:0x10000 key:0x183800 00:24:16.014 [2024-11-20 16:14:46.571644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:87168 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae8f500 len:0x10000 key:0x184400 00:24:16.014 [2024-11-20 16:14:46.571674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:87296 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b20f100 len:0x10000 key:0x183800 00:24:16.014 [2024-11-20 16:14:46.571705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:87424 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae5f380 len:0x10000 key:0x184400 00:24:16.014 [2024-11-20 16:14:46.571735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:87552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b3f0000 len:0x10000 key:0x183800 00:24:16.014 [2024-11-20 16:14:46.571765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:87680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b26f400 len:0x10000 key:0x183800 00:24:16.014 [2024-11-20 16:14:46.571795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:87808 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b25f380 len:0x10000 key:0x183800 00:24:16.014 [2024-11-20 16:14:46.571825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b05f980 len:0x10000 key:0x183600 00:24:16.014 [2024-11-20 16:14:46.571855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88064 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001aedf780 len:0x10000 key:0x184400 00:24:16.014 [2024-11-20 16:14:46.571885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:88192 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b2cf700 len:0x10000 key:0x183800 00:24:16.014 [2024-11-20 16:14:46.571917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571934] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:88320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b00f700 len:0x10000 key:0x183600 00:24:16.014 [2024-11-20 16:14:46.571948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:88448 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001ae4f300 len:0x10000 key:0x184400 00:24:16.014 [2024-11-20 16:14:46.571977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.571994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:78080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001021e000 len:0x10000 key:0x183e00 00:24:16.014 [2024-11-20 16:14:46.572007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.014 [2024-11-20 16:14:46.572026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:78208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001023f000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:78848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b77b000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:78976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b75a000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e856000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:79232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e877000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:79360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e898000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572213] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011cac000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c8b000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:80000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011c6a000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:80128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8c5000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b8a4000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b883000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:81152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b862000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b841000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000b820000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000124aa000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:81920 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c315000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2f4000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:82304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2d3000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:82432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c2b2000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:82560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c291000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:82688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c270000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:82816 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ef000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:83072 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ce000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572782] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:83328 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d6ad000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:83712 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000d68c000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.572844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:83840 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000e3f4000 len:0x10000 key:0x183e00 00:24:16.015 [2024-11-20 16:14:46.572857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:9fb0 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.575588] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x200019256b00 was disconnected and freed. reset controller. 00:24:16.015 [2024-11-20 16:14:46.575607] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:16.015 [2024-11-20 16:14:46.575626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61824 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b93fa80 len:0x10000 key:0x184100 00:24:16.015 [2024-11-20 16:14:46.575638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.575658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61952 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b69f580 len:0x10000 key:0x183200 00:24:16.015 [2024-11-20 16:14:46.575682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.575700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:62080 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7cff00 len:0x10000 key:0x183200 00:24:16.015 [2024-11-20 16:14:46.575713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.575730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:62208 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b92fa00 len:0x10000 key:0x184100 00:24:16.015 [2024-11-20 16:14:46.575743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.575761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:62336 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b6df780 len:0x10000 key:0x183200 00:24:16.015 [2024-11-20 16:14:46.575774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.575791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:62464 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b46fa00 len:0x10000 key:0x183d00 00:24:16.015 [2024-11-20 16:14:46.575805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.575822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:62592 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b41f780 len:0x10000 key:0x183d00 00:24:16.015 [2024-11-20 16:14:46.575835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.015 [2024-11-20 16:14:46.575852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62720 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7f0000 len:0x10000 key:0x183200 00:24:16.015 [2024-11-20 16:14:46.575866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.575883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:62848 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7afe00 len:0x10000 key:0x183200 00:24:16.016 [2024-11-20 16:14:46.575896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.575913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:62976 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b79fd80 len:0x10000 key:0x183200 00:24:16.016 [2024-11-20 16:14:46.575927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.575944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63104 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b64f300 len:0x10000 key:0x183200 00:24:16.016 [2024-11-20 16:14:46.575957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.575974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:63232 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8ef800 len:0x10000 key:0x184100 00:24:16.016 [2024-11-20 16:14:46.575987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63360 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b96fc00 len:0x10000 key:0x184100 00:24:16.016 [2024-11-20 16:14:46.576018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:63488 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b91f980 len:0x10000 key:0x184100 00:24:16.016 [2024-11-20 16:14:46.576051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63616 
len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b4afc00 len:0x10000 key:0x183d00 00:24:16.016 [2024-11-20 16:14:46.576081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:63744 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8af600 len:0x10000 key:0x184100 00:24:16.016 [2024-11-20 16:14:46.576112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:63872 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b73fa80 len:0x10000 key:0x183200 00:24:16.016 [2024-11-20 16:14:46.576143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:64000 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b44f900 len:0x10000 key:0x183d00 00:24:16.016 [2024-11-20 16:14:46.576173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:64128 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b70f900 len:0x10000 key:0x183200 00:24:16.016 [2024-11-20 16:14:46.576203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:64256 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b65f380 len:0x10000 key:0x183200 00:24:16.016 [2024-11-20 16:14:46.576234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:64384 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b78fd00 len:0x10000 key:0x183200 00:24:16.016 [2024-11-20 16:14:46.576264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:64512 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b77fc80 len:0x10000 key:0x183200 00:24:16.016 [2024-11-20 16:14:46.576294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64640 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b60f100 len:0x10000 key:0x183200 00:24:16.016 [2024-11-20 16:14:46.576324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:64768 len:128 
SGL KEYED DATA BLOCK ADDRESS 0x20001b99fd80 len:0x10000 key:0x184100 00:24:16.016 [2024-11-20 16:14:46.576355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:64896 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b98fd00 len:0x10000 key:0x184100 00:24:16.016 [2024-11-20 16:14:46.576388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:65024 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b49fb80 len:0x10000 key:0x183d00 00:24:16.016 [2024-11-20 16:14:46.576419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:65152 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b63f280 len:0x10000 key:0x183200 00:24:16.016 [2024-11-20 16:14:46.576449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:65280 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8bf680 len:0x10000 key:0x184100 00:24:16.016 [2024-11-20 16:14:46.576479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:65408 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b43f880 len:0x10000 key:0x183d00 00:24:16.016 [2024-11-20 16:14:46.576509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:65536 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b9afe00 len:0x10000 key:0x184100 00:24:16.016 [2024-11-20 16:14:46.576545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:65664 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b40f700 len:0x10000 key:0x183d00 00:24:16.016 [2024-11-20 16:14:46.576576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:65792 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b89f580 len:0x10000 key:0x184100 00:24:16.016 [2024-11-20 16:14:46.576606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:65920 len:128 SGL 
KEYED DATA BLOCK ADDRESS 0x20001b9f0000 len:0x10000 key:0x184100 00:24:16.016 [2024-11-20 16:14:46.576637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b94fb00 len:0x10000 key:0x184100 00:24:16.016 [2024-11-20 16:14:46.576669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:66176 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b8cf700 len:0x10000 key:0x184100 00:24:16.016 [2024-11-20 16:14:46.576700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66304 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b7bfe80 len:0x10000 key:0x183200 00:24:16.016 [2024-11-20 16:14:46.576733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66432 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b71f980 len:0x10000 key:0x183200 00:24:16.016 [2024-11-20 16:14:46.576763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:66560 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b47fa80 len:0x10000 key:0x183d00 00:24:16.016 [2024-11-20 16:14:46.576794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:66688 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001b48fb00 len:0x10000 key:0x183d00 00:24:16.016 [2024-11-20 16:14:46.576825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:55552 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bded000 len:0x10000 key:0x183e00 00:24:16.016 [2024-11-20 16:14:46.576855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55680 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000be0e000 len:0x10000 key:0x183e00 00:24:16.016 [2024-11-20 16:14:46.576887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55808 len:128 SGL KEYED 
DATA BLOCK ADDRESS 0x20000be2f000 len:0x10000 key:0x183e00 00:24:16.016 [2024-11-20 16:14:46.576918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55936 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d2c000 len:0x10000 key:0x183e00 00:24:16.016 [2024-11-20 16:14:46.576949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.016 [2024-11-20 16:14:46.576967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:56320 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012d4d000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.576980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.576998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:56704 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001063e000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:56960 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001065f000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:57344 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000db0f000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:57472 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200011808000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:57984 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000117e7000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:58112 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000df50000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:58240 len:128 SGL KEYED DATA 
BLOCK ADDRESS 0x2000117c6000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:58496 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000117a5000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:58752 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecb8000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:58880 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000ecd9000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:59264 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000bdcc000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:59520 len:128 SGL KEYED DATA BLOCK ADDRESS 0x2000121b3000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:60160 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012192000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:60544 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012171000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:60672 len:128 SGL KEYED DATA BLOCK ADDRESS 0x200012150000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:60800 len:128 SGL KEYED DATA BLOCK 
ADDRESS 0x2000133bf000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:60928 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20001339e000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61056 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000c336000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61184 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbfa000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.577603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61312 len:128 SGL KEYED DATA BLOCK ADDRESS 0x20000cbd9000 len:0x10000 key:0x183e00 00:24:16.017 [2024-11-20 16:14:46.577616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:53966 cdw0:43421000 sqhd:73a4 p:0 m:0 dnr:0 00:24:16.017 [2024-11-20 16:14:46.595505] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192568c0 was disconnected and freed. reset controller. 00:24:16.017 [2024-11-20 16:14:46.595530] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:16.017 [2024-11-20 16:14:46.595588] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:16.017 [2024-11-20 16:14:46.595608] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:16.017 [2024-11-20 16:14:46.595621] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:16.017 [2024-11-20 16:14:46.595633] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:16.017 [2024-11-20 16:14:46.595645] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:16.017 [2024-11-20 16:14:46.595657] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:16.017 [2024-11-20 16:14:46.595669] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:16.017 [2024-11-20 16:14:46.595681] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:16.017 [2024-11-20 16:14:46.595694] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:24:16.017 [2024-11-20 16:14:46.595707] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:16.017 task offset: 86016 on job bdev=Nvme1n1 fails 00:24:16.017 00:24:16.017 Latency(us) 00:24:16.017 [2024-11-20T15:14:46.822Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.017 [2024-11-20T15:14:46.822Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.017 [2024-11-20T15:14:46.822Z] Job: Nvme1n1 ended in about 1.97 seconds with error 00:24:16.017 Verification LBA range: start 0x0 length 0x400 00:24:16.017 Nvme1n1 : 1.97 331.94 20.75 32.53 0.00 175186.78 42572.19 1087163.60 00:24:16.017 [2024-11-20T15:14:46.822Z] Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.017 [2024-11-20T15:14:46.822Z] Job: Nvme2n1 ended in about 1.97 seconds with error 00:24:16.017 Verification LBA range: start 0x0 length 0x400 00:24:16.017 Nvme2n1 : 1.97 320.79 20.05 32.49 0.00 179909.43 10485.76 1087163.60 00:24:16.017 [2024-11-20T15:14:46.822Z] Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.017 [2024-11-20T15:14:46.822Z] Job: Nvme3n1 ended in about 1.98 seconds with error 00:24:16.017 Verification LBA range: start 0x0 length 0x400 00:24:16.017 Nvme3n1 : 1.98 317.68 19.86 32.38 0.00 180957.55 43201.33 1093874.48 00:24:16.017 [2024-11-20T15:14:46.822Z] Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.017 [2024-11-20T15:14:46.822Z] Job: Nvme4n1 ended in about 1.98 seconds with error 00:24:16.017 Verification LBA range: start 0x0 length 0x400 00:24:16.017 Nvme4n1 : 1.98 318.86 19.93 32.29 0.00 179805.17 19398.66 1093874.48 00:24:16.017 [2024-11-20T15:14:46.822Z] Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.017 [2024-11-20T15:14:46.822Z] Job: Nvme5n1 ended in about 1.99 seconds with error 00:24:16.017 Verification LBA range: start 0x0 length 0x400 00:24:16.017 Nvme5n1 : 1.99 316.11 19.76 32.22 0.00 180763.74 46137.34 1093874.48 00:24:16.017 [2024-11-20T15:14:46.822Z] Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.017 [2024-11-20T15:14:46.822Z] Job: Nvme6n1 ended in about 1.99 seconds with error 00:24:16.017 Verification LBA range: start 0x0 length 0x400 00:24:16.017 Nvme6n1 : 1.99 315.42 19.71 32.14 0.00 180666.92 46556.77 1087163.60 00:24:16.017 [2024-11-20T15:14:46.822Z] Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.017 [2024-11-20T15:14:46.822Z] Job: Nvme7n1 ended in about 2.00 seconds with error 00:24:16.017 Verification LBA range: start 0x0 length 0x400 00:24:16.017 Nvme7n1 : 2.00 314.70 19.67 32.07 0.00 180629.19 46766.49 1087163.60 00:24:16.017 [2024-11-20T15:14:46.822Z] Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.017 [2024-11-20T15:14:46.822Z] Job: Nvme8n1 ended in about 2.00 seconds with error 00:24:16.017 Verification LBA range: start 0x0 length 0x400 00:24:16.017 Nvme8n1 : 2.00 313.95 19.62 31.99 0.00 180265.06 46976.20 1087163.60 00:24:16.017 [2024-11-20T15:14:46.822Z] Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.018 [2024-11-20T15:14:46.823Z] Job: Nvme9n1 ended in about 2.00 seconds with error 00:24:16.018 Verification LBA range: start 0x0 length 0x400 00:24:16.018 Nvme9n1 : 2.00 313.24 19.58 31.92 0.00 180083.68 46347.06 1087163.60 00:24:16.018 [2024-11-20T15:14:46.823Z] Job: Nvme10n1 (Core Mask 
0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.018 [2024-11-20T15:14:46.823Z] Job: Nvme10n1 ended in about 2.01 seconds with error 00:24:16.018 Verification LBA range: start 0x0 length 0x400 00:24:16.018 Nvme10n1 : 2.01 227.90 14.24 31.85 0.00 238450.60 45088.77 1087163.60 00:24:16.018 [2024-11-20T15:14:46.823Z] =================================================================================================================== 00:24:16.018 [2024-11-20T15:14:46.823Z] Total : 3090.59 193.16 321.88 0.00 184303.53 10485.76 1093874.48 00:24:16.018 [2024-11-20 16:14:46.618137] app.c: 910:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:16.018 [2024-11-20 16:14:46.618163] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:16.018 [2024-11-20 16:14:46.618179] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:24:16.018 [2024-11-20 16:14:46.618190] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:16.018 [2024-11-20 16:14:46.618202] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:24:16.018 [2024-11-20 16:14:46.618313] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:16.018 [2024-11-20 16:14:46.618326] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:24:16.018 [2024-11-20 16:14:46.618338] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:24:16.018 [2024-11-20 16:14:46.618349] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:16.018 [2024-11-20 16:14:46.618359] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:16.018 [2024-11-20 16:14:46.618370] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:16.018 [2024-11-20 16:14:46.629700] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:16.018 [2024-11-20 16:14:46.629723] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:16.018 [2024-11-20 16:14:46.629739] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:24:16.018 [2024-11-20 16:14:46.629826] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:16.018 [2024-11-20 16:14:46.629837] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:16.018 [2024-11-20 16:14:46.629844] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192e53c0 00:24:16.018 [2024-11-20 16:14:46.629929] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:16.018 [2024-11-20 16:14:46.629940] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:16.018 [2024-11-20 16:14:46.629948] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ba580 00:24:16.018 [2024-11-20 16:14:46.630032] 
nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:16.018 [2024-11-20 16:14:46.630043] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:16.018 [2024-11-20 16:14:46.630051] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192dc7c0 00:24:16.018 [2024-11-20 16:14:46.630161] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:16.018 [2024-11-20 16:14:46.630172] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:16.018 [2024-11-20 16:14:46.630180] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192a89c0 00:24:16.018 [2024-11-20 16:14:46.630273] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:16.018 [2024-11-20 16:14:46.630284] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:16.018 [2024-11-20 16:14:46.630291] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192bd540 00:24:16.018 [2024-11-20 16:14:46.630398] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:16.018 [2024-11-20 16:14:46.630409] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:16.018 [2024-11-20 16:14:46.630416] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928f500 00:24:16.018 [2024-11-20 16:14:46.630527] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:16.018 [2024-11-20 16:14:46.630538] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:16.018 [2024-11-20 16:14:46.630549] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001928e180 00:24:16.018 [2024-11-20 16:14:46.630627] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:16.018 [2024-11-20 16:14:46.630637] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:16.018 [2024-11-20 16:14:46.630644] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x20001929c180 00:24:16.018 [2024-11-20 16:14:46.630722] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:24:16.018 [2024-11-20 16:14:46.630733] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:24:16.018 [2024-11-20 16:14:46.630740] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192c6100 00:24:16.278 16:14:46 -- target/shutdown.sh@141 -- # kill -9 1434707 00:24:16.278 16:14:46 -- target/shutdown.sh@143 -- # stoptarget 00:24:16.278 16:14:46 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:16.278 16:14:46 -- target/shutdown.sh@42 -- # rm -rf 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:16.279 16:14:46 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:16.279 16:14:46 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:16.279 16:14:46 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:16.279 16:14:46 -- nvmf/common.sh@116 -- # sync 00:24:16.279 16:14:46 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:16.279 16:14:46 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:16.279 16:14:46 -- nvmf/common.sh@119 -- # set +e 00:24:16.279 16:14:46 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:16.279 16:14:46 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:16.279 rmmod nvme_rdma 00:24:16.279 rmmod nvme_fabrics 00:24:16.279 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh: line 120: 1434707 Killed $rootdir/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") -q 64 -o 65536 -w verify -t 10 00:24:16.279 16:14:46 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:16.279 16:14:46 -- nvmf/common.sh@123 -- # set -e 00:24:16.279 16:14:46 -- nvmf/common.sh@124 -- # return 0 00:24:16.279 16:14:46 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:24:16.279 16:14:46 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:16.279 16:14:46 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:16.279 00:24:16.279 real 0m5.302s 00:24:16.279 user 0m18.250s 00:24:16.279 sys 0m1.347s 00:24:16.279 16:14:46 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:16.279 16:14:46 -- common/autotest_common.sh@10 -- # set +x 00:24:16.279 ************************************ 00:24:16.279 END TEST nvmf_shutdown_tc3 00:24:16.279 ************************************ 00:24:16.279 16:14:47 -- target/shutdown.sh@150 -- # trap - SIGINT SIGTERM EXIT 00:24:16.279 00:24:16.279 real 0m25.294s 00:24:16.279 user 1m14.742s 00:24:16.279 sys 0m9.242s 00:24:16.279 16:14:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:16.279 16:14:47 -- common/autotest_common.sh@10 -- # set +x 00:24:16.279 ************************************ 00:24:16.279 END TEST nvmf_shutdown 00:24:16.279 ************************************ 00:24:16.538 16:14:47 -- nvmf/nvmf.sh@86 -- # timing_exit target 00:24:16.538 16:14:47 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:16.538 16:14:47 -- common/autotest_common.sh@10 -- # set +x 00:24:16.538 16:14:47 -- nvmf/nvmf.sh@88 -- # timing_enter host 00:24:16.538 16:14:47 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:16.539 16:14:47 -- common/autotest_common.sh@10 -- # set +x 00:24:16.539 16:14:47 -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:24:16.539 16:14:47 -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:24:16.539 16:14:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:16.539 16:14:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:16.539 16:14:47 -- common/autotest_common.sh@10 -- # set +x 00:24:16.539 ************************************ 00:24:16.539 START TEST nvmf_multicontroller 00:24:16.539 ************************************ 00:24:16.539 16:14:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=rdma 00:24:16.539 * Looking for test storage... 
00:24:16.539 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:16.539 16:14:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:16.539 16:14:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:16.539 16:14:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:16.539 16:14:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:16.539 16:14:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:16.539 16:14:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:16.539 16:14:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:16.539 16:14:47 -- scripts/common.sh@335 -- # IFS=.-: 00:24:16.539 16:14:47 -- scripts/common.sh@335 -- # read -ra ver1 00:24:16.539 16:14:47 -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.539 16:14:47 -- scripts/common.sh@336 -- # read -ra ver2 00:24:16.539 16:14:47 -- scripts/common.sh@337 -- # local 'op=<' 00:24:16.539 16:14:47 -- scripts/common.sh@339 -- # ver1_l=2 00:24:16.539 16:14:47 -- scripts/common.sh@340 -- # ver2_l=1 00:24:16.539 16:14:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:16.539 16:14:47 -- scripts/common.sh@343 -- # case "$op" in 00:24:16.539 16:14:47 -- scripts/common.sh@344 -- # : 1 00:24:16.539 16:14:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:16.539 16:14:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:16.539 16:14:47 -- scripts/common.sh@364 -- # decimal 1 00:24:16.539 16:14:47 -- scripts/common.sh@352 -- # local d=1 00:24:16.539 16:14:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.539 16:14:47 -- scripts/common.sh@354 -- # echo 1 00:24:16.539 16:14:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:16.539 16:14:47 -- scripts/common.sh@365 -- # decimal 2 00:24:16.539 16:14:47 -- scripts/common.sh@352 -- # local d=2 00:24:16.539 16:14:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.539 16:14:47 -- scripts/common.sh@354 -- # echo 2 00:24:16.539 16:14:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:16.539 16:14:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:16.539 16:14:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:16.539 16:14:47 -- scripts/common.sh@367 -- # return 0 00:24:16.539 16:14:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.539 16:14:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:16.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.539 --rc genhtml_branch_coverage=1 00:24:16.539 --rc genhtml_function_coverage=1 00:24:16.539 --rc genhtml_legend=1 00:24:16.539 --rc geninfo_all_blocks=1 00:24:16.539 --rc geninfo_unexecuted_blocks=1 00:24:16.539 00:24:16.539 ' 00:24:16.539 16:14:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:16.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.539 --rc genhtml_branch_coverage=1 00:24:16.539 --rc genhtml_function_coverage=1 00:24:16.539 --rc genhtml_legend=1 00:24:16.539 --rc geninfo_all_blocks=1 00:24:16.539 --rc geninfo_unexecuted_blocks=1 00:24:16.539 00:24:16.539 ' 00:24:16.539 16:14:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:16.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.539 --rc genhtml_branch_coverage=1 00:24:16.539 --rc genhtml_function_coverage=1 00:24:16.539 --rc genhtml_legend=1 00:24:16.539 --rc geninfo_all_blocks=1 00:24:16.539 --rc geninfo_unexecuted_blocks=1 00:24:16.539 00:24:16.539 ' 
00:24:16.539 16:14:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:16.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.539 --rc genhtml_branch_coverage=1 00:24:16.539 --rc genhtml_function_coverage=1 00:24:16.539 --rc genhtml_legend=1 00:24:16.539 --rc geninfo_all_blocks=1 00:24:16.539 --rc geninfo_unexecuted_blocks=1 00:24:16.539 00:24:16.539 ' 00:24:16.539 16:14:47 -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.539 16:14:47 -- nvmf/common.sh@7 -- # uname -s 00:24:16.539 16:14:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.539 16:14:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.539 16:14:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.539 16:14:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.539 16:14:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.539 16:14:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.539 16:14:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.539 16:14:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.539 16:14:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.539 16:14:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.799 16:14:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:16.799 16:14:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:16.799 16:14:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.799 16:14:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.799 16:14:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.799 16:14:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:16.799 16:14:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:16.799 16:14:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:16.799 16:14:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:16.799 16:14:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.799 16:14:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.799 16:14:47 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.799 16:14:47 -- paths/export.sh@5 -- # export PATH 00:24:16.799 16:14:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:16.799 16:14:47 -- nvmf/common.sh@46 -- # : 0 00:24:16.799 16:14:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:16.799 16:14:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:16.799 16:14:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:16.799 16:14:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:16.799 16:14:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:16.799 16:14:47 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:16.799 16:14:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:16.799 16:14:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:16.799 16:14:47 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:16.799 16:14:47 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:16.799 16:14:47 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:16.799 16:14:47 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:16.799 16:14:47 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:16.799 16:14:47 -- host/multicontroller.sh@18 -- # '[' rdma == rdma ']' 00:24:16.799 16:14:47 -- host/multicontroller.sh@19 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:24:16.799 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 
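The xtrace entries around this point come from the guard at the top of multicontroller.sh: when the transport under test is rdma, the script prints the notice above and exits before any controllers are registered. A minimal sketch of that guard follows, assuming the transport is carried in a TEST_TRANSPORT variable; the trace only shows the already-expanded comparison '[' rdma == rdma ']', so the variable name is illustrative rather than taken from the script.

    # Hedged sketch of the early-exit guard traced here; TEST_TRANSPORT is an
    # assumed variable name, and the echoed message is copied from the log above.
    if [ "$TEST_TRANSPORT" == "rdma" ]; then
        echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.'
        exit 0
    fi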
00:24:16.799 16:14:47 -- host/multicontroller.sh@20 -- # exit 0 00:24:16.799 00:24:16.799 real 0m0.222s 00:24:16.799 user 0m0.117s 00:24:16.799 sys 0m0.123s 00:24:16.799 16:14:47 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:16.799 16:14:47 -- common/autotest_common.sh@10 -- # set +x 00:24:16.799 ************************************ 00:24:16.799 END TEST nvmf_multicontroller 00:24:16.799 ************************************ 00:24:16.799 16:14:47 -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:24:16.799 16:14:47 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:16.799 16:14:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:16.799 16:14:47 -- common/autotest_common.sh@10 -- # set +x 00:24:16.799 ************************************ 00:24:16.799 START TEST nvmf_aer 00:24:16.799 ************************************ 00:24:16.799 16:14:47 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=rdma 00:24:16.799 * Looking for test storage... 00:24:16.799 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:16.799 16:14:47 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:16.799 16:14:47 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:16.799 16:14:47 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:16.799 16:14:47 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:16.800 16:14:47 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:16.800 16:14:47 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:16.800 16:14:47 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:16.800 16:14:47 -- scripts/common.sh@335 -- # IFS=.-: 00:24:16.800 16:14:47 -- scripts/common.sh@335 -- # read -ra ver1 00:24:16.800 16:14:47 -- scripts/common.sh@336 -- # IFS=.-: 00:24:16.800 16:14:47 -- scripts/common.sh@336 -- # read -ra ver2 00:24:16.800 16:14:47 -- scripts/common.sh@337 -- # local 'op=<' 00:24:16.800 16:14:47 -- scripts/common.sh@339 -- # ver1_l=2 00:24:16.800 16:14:47 -- scripts/common.sh@340 -- # ver2_l=1 00:24:16.800 16:14:47 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:16.800 16:14:47 -- scripts/common.sh@343 -- # case "$op" in 00:24:16.800 16:14:47 -- scripts/common.sh@344 -- # : 1 00:24:16.800 16:14:47 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:16.800 16:14:47 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:16.800 16:14:47 -- scripts/common.sh@364 -- # decimal 1 00:24:16.800 16:14:47 -- scripts/common.sh@352 -- # local d=1 00:24:16.800 16:14:47 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:16.800 16:14:47 -- scripts/common.sh@354 -- # echo 1 00:24:16.800 16:14:47 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:16.800 16:14:47 -- scripts/common.sh@365 -- # decimal 2 00:24:16.800 16:14:47 -- scripts/common.sh@352 -- # local d=2 00:24:16.800 16:14:47 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:16.800 16:14:47 -- scripts/common.sh@354 -- # echo 2 00:24:16.800 16:14:47 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:16.800 16:14:47 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:16.800 16:14:47 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:16.800 16:14:47 -- scripts/common.sh@367 -- # return 0 00:24:16.800 16:14:47 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:16.800 16:14:47 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:16.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.800 --rc genhtml_branch_coverage=1 00:24:16.800 --rc genhtml_function_coverage=1 00:24:16.800 --rc genhtml_legend=1 00:24:16.800 --rc geninfo_all_blocks=1 00:24:16.800 --rc geninfo_unexecuted_blocks=1 00:24:16.800 00:24:16.800 ' 00:24:16.800 16:14:47 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:16.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.800 --rc genhtml_branch_coverage=1 00:24:16.800 --rc genhtml_function_coverage=1 00:24:16.800 --rc genhtml_legend=1 00:24:16.800 --rc geninfo_all_blocks=1 00:24:16.800 --rc geninfo_unexecuted_blocks=1 00:24:16.800 00:24:16.800 ' 00:24:16.800 16:14:47 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:16.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.800 --rc genhtml_branch_coverage=1 00:24:16.800 --rc genhtml_function_coverage=1 00:24:16.800 --rc genhtml_legend=1 00:24:16.800 --rc geninfo_all_blocks=1 00:24:16.800 --rc geninfo_unexecuted_blocks=1 00:24:16.800 00:24:16.800 ' 00:24:16.800 16:14:47 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:16.800 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:16.800 --rc genhtml_branch_coverage=1 00:24:16.800 --rc genhtml_function_coverage=1 00:24:16.800 --rc genhtml_legend=1 00:24:16.800 --rc geninfo_all_blocks=1 00:24:16.800 --rc geninfo_unexecuted_blocks=1 00:24:16.800 00:24:16.800 ' 00:24:16.800 16:14:47 -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:16.800 16:14:47 -- nvmf/common.sh@7 -- # uname -s 00:24:16.800 16:14:47 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:16.800 16:14:47 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:16.800 16:14:47 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:16.800 16:14:47 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:16.800 16:14:47 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:16.800 16:14:47 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:16.800 16:14:47 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:16.800 16:14:47 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:16.800 16:14:47 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:16.800 16:14:47 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:16.800 16:14:47 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
00:24:16.800 16:14:47 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:16.800 16:14:47 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:16.800 16:14:47 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:16.800 16:14:47 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:16.800 16:14:47 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:17.059 16:14:47 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:17.059 16:14:47 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:17.059 16:14:47 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:17.059 16:14:47 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.059 16:14:47 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.059 16:14:47 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.059 16:14:47 -- paths/export.sh@5 -- # export PATH 00:24:17.059 16:14:47 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:17.059 16:14:47 -- nvmf/common.sh@46 -- # : 0 00:24:17.059 16:14:47 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:17.059 16:14:47 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:17.059 16:14:47 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:17.059 16:14:47 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:17.059 16:14:47 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:17.059 16:14:47 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:17.059 16:14:47 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:17.059 16:14:47 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:17.059 16:14:47 -- host/aer.sh@11 -- # nvmftestinit 00:24:17.059 16:14:47 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:17.059 16:14:47 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:17.059 16:14:47 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:17.059 16:14:47 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:17.059 16:14:47 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:17.059 16:14:47 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.059 16:14:47 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:17.059 16:14:47 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:17.059 16:14:47 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:17.059 16:14:47 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:17.059 16:14:47 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:17.059 16:14:47 -- common/autotest_common.sh@10 -- # set +x 00:24:23.631 16:14:54 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:23.631 16:14:54 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:23.632 16:14:54 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:23.632 16:14:54 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:23.632 16:14:54 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:23.632 16:14:54 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:23.632 16:14:54 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:23.632 16:14:54 -- nvmf/common.sh@294 -- # net_devs=() 00:24:23.632 16:14:54 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:23.632 16:14:54 -- nvmf/common.sh@295 -- # e810=() 00:24:23.632 16:14:54 -- nvmf/common.sh@295 -- # local -ga e810 00:24:23.632 16:14:54 -- nvmf/common.sh@296 -- # x722=() 00:24:23.632 16:14:54 -- nvmf/common.sh@296 -- # local -ga x722 00:24:23.632 16:14:54 -- nvmf/common.sh@297 -- # mlx=() 00:24:23.632 16:14:54 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:23.632 16:14:54 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:23.632 16:14:54 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:23.632 16:14:54 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:23.632 16:14:54 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:23.632 16:14:54 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:23.632 16:14:54 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:23.632 16:14:54 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:23.632 16:14:54 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:23.632 16:14:54 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:23.632 16:14:54 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:23.632 16:14:54 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:23.632 16:14:54 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:23.632 16:14:54 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:23.632 16:14:54 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:23.632 16:14:54 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:23.632 16:14:54 -- 
nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:23.632 16:14:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:23.632 16:14:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:23.632 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:23.632 16:14:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:23.632 16:14:54 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:23.632 16:14:54 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:23.632 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:23.632 16:14:54 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:23.632 16:14:54 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:23.632 16:14:54 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:23.632 16:14:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.632 16:14:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:23.632 16:14:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.632 16:14:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:23.632 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:23.632 16:14:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.632 16:14:54 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:23.632 16:14:54 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:23.632 16:14:54 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:23.632 16:14:54 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:23.632 16:14:54 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:23.632 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:23.632 16:14:54 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:23.632 16:14:54 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:23.632 16:14:54 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:23.632 16:14:54 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:23.632 16:14:54 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:23.632 16:14:54 -- nvmf/common.sh@57 -- # uname 00:24:23.632 16:14:54 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:23.632 16:14:54 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:23.632 16:14:54 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:23.632 16:14:54 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:23.632 16:14:54 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:23.632 16:14:54 -- nvmf/common.sh@65 -- # 
modprobe iw_cm 00:24:23.632 16:14:54 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:23.632 16:14:54 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:23.632 16:14:54 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:23.632 16:14:54 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:23.632 16:14:54 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:23.632 16:14:54 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:23.632 16:14:54 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:23.632 16:14:54 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:23.632 16:14:54 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:23.632 16:14:54 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:23.632 16:14:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:23.632 16:14:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.632 16:14:54 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:23.632 16:14:54 -- nvmf/common.sh@104 -- # continue 2 00:24:23.632 16:14:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:23.632 16:14:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.632 16:14:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.632 16:14:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:23.632 16:14:54 -- nvmf/common.sh@104 -- # continue 2 00:24:23.632 16:14:54 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:23.632 16:14:54 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:23.632 16:14:54 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:23.632 16:14:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:23.632 16:14:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:23.632 16:14:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:23.632 16:14:54 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:23.632 16:14:54 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:23.632 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:23.632 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:23.632 altname enp217s0f0np0 00:24:23.632 altname ens818f0np0 00:24:23.632 inet 192.168.100.8/24 scope global mlx_0_0 00:24:23.632 valid_lft forever preferred_lft forever 00:24:23.632 16:14:54 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:23.632 16:14:54 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:23.632 16:14:54 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:23.632 16:14:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:23.632 16:14:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:23.632 16:14:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:23.632 16:14:54 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:23.632 16:14:54 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:23.632 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:23.632 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:23.632 altname enp217s0f1np1 00:24:23.632 altname ens818f1np1 00:24:23.632 inet 192.168.100.9/24 scope global mlx_0_1 00:24:23.632 valid_lft 
forever preferred_lft forever 00:24:23.632 16:14:54 -- nvmf/common.sh@410 -- # return 0 00:24:23.632 16:14:54 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:23.632 16:14:54 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:23.632 16:14:54 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:23.632 16:14:54 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:23.632 16:14:54 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:23.633 16:14:54 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:23.633 16:14:54 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:23.633 16:14:54 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:23.633 16:14:54 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:23.633 16:14:54 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:23.633 16:14:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:23.633 16:14:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.633 16:14:54 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:23.633 16:14:54 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:23.633 16:14:54 -- nvmf/common.sh@104 -- # continue 2 00:24:23.633 16:14:54 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:23.633 16:14:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.633 16:14:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:23.633 16:14:54 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:23.633 16:14:54 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:23.633 16:14:54 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:23.633 16:14:54 -- nvmf/common.sh@104 -- # continue 2 00:24:23.633 16:14:54 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:23.633 16:14:54 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:23.633 16:14:54 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:23.633 16:14:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:23.633 16:14:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:23.633 16:14:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:23.633 16:14:54 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:23.633 16:14:54 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:23.633 16:14:54 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:23.633 16:14:54 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:23.633 16:14:54 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:23.633 16:14:54 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:23.633 16:14:54 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:23.633 192.168.100.9' 00:24:23.633 16:14:54 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:23.633 192.168.100.9' 00:24:23.633 16:14:54 -- nvmf/common.sh@445 -- # head -n 1 00:24:23.633 16:14:54 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:23.633 16:14:54 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:23.633 192.168.100.9' 00:24:23.633 16:14:54 -- nvmf/common.sh@446 -- # tail -n +2 00:24:23.633 16:14:54 -- nvmf/common.sh@446 -- # head -n 1 00:24:23.633 16:14:54 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:23.633 16:14:54 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:23.633 16:14:54 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:23.633 16:14:54 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:23.633 16:14:54 -- 
nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:23.633 16:14:54 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:23.633 16:14:54 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:23.633 16:14:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:23.633 16:14:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:23.633 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:24:23.633 16:14:54 -- nvmf/common.sh@469 -- # nvmfpid=1438586 00:24:23.633 16:14:54 -- nvmf/common.sh@470 -- # waitforlisten 1438586 00:24:23.633 16:14:54 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:23.633 16:14:54 -- common/autotest_common.sh@829 -- # '[' -z 1438586 ']' 00:24:23.633 16:14:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:23.633 16:14:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:23.633 16:14:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:23.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:23.633 16:14:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:23.633 16:14:54 -- common/autotest_common.sh@10 -- # set +x 00:24:23.633 [2024-11-20 16:14:54.330956] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:23.633 [2024-11-20 16:14:54.331006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:23.633 EAL: No free 2048 kB hugepages reported on node 1 00:24:23.633 [2024-11-20 16:14:54.401886] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:23.893 [2024-11-20 16:14:54.439595] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:23.893 [2024-11-20 16:14:54.439703] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:23.893 [2024-11-20 16:14:54.439713] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:23.893 [2024-11-20 16:14:54.439721] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
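The discovery pass above reduces to loading the kernel RDMA stack and reading the IPv4 address already configured on each Mellanox netdev; in this run the addresses are already present, so the harness only records them. A minimal sketch of the equivalent manual steps, assuming the same interface names and addressing seen here (mlx_0_0 at 192.168.100.8/24, mlx_0_1 at 192.168.100.9/24):

    # core IB/RDMA modules loaded by load_ib_rdma_modules, plus the host-side NVMe/RDMA transport
    for m in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm nvme-rdma; do modprobe "$m"; done
    # get_ip_address: first IPv4 address on the interface, prefix length stripped
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.8
    ip -o -4 addr show mlx_0_1 | awk '{print $4}' | cut -d/ -f1   # 192.168.100.9

With both addresses in hand the harness sets NVMF_FIRST_TARGET_IP / NVMF_SECOND_TARGET_IP and appends --num-shared-buffers 1024 to the rdma transport options before starting nvmf_tgt, as recorded above.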
00:24:23.893 [2024-11-20 16:14:54.439773] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:23.893 [2024-11-20 16:14:54.439870] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:23.893 [2024-11-20 16:14:54.439955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:23.893 [2024-11-20 16:14:54.439957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:24.464 16:14:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:24.464 16:14:55 -- common/autotest_common.sh@862 -- # return 0 00:24:24.464 16:14:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:24.464 16:14:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:24.464 16:14:55 -- common/autotest_common.sh@10 -- # set +x 00:24:24.464 16:14:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:24.464 16:14:55 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:24:24.464 16:14:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.464 16:14:55 -- common/autotest_common.sh@10 -- # set +x 00:24:24.464 [2024-11-20 16:14:55.223101] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xe350d0/0xe395a0) succeed. 00:24:24.464 [2024-11-20 16:14:55.232243] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xe36670/0xe7ac40) succeed. 00:24:24.750 16:14:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.750 16:14:55 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:24.750 16:14:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.750 16:14:55 -- common/autotest_common.sh@10 -- # set +x 00:24:24.750 Malloc0 00:24:24.750 16:14:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.750 16:14:55 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:24.750 16:14:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.750 16:14:55 -- common/autotest_common.sh@10 -- # set +x 00:24:24.750 16:14:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.750 16:14:55 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:24.750 16:14:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.750 16:14:55 -- common/autotest_common.sh@10 -- # set +x 00:24:24.750 16:14:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.750 16:14:55 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:24:24.750 16:14:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.750 16:14:55 -- common/autotest_common.sh@10 -- # set +x 00:24:24.750 [2024-11-20 16:14:55.402160] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:24.750 16:14:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.750 16:14:55 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:24.750 16:14:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:24.750 16:14:55 -- common/autotest_common.sh@10 -- # set +x 00:24:24.750 [2024-11-20 16:14:55.409773] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:24:24.750 [ 00:24:24.750 { 00:24:24.750 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:24.750 "subtype": 
"Discovery", 00:24:24.750 "listen_addresses": [], 00:24:24.750 "allow_any_host": true, 00:24:24.750 "hosts": [] 00:24:24.750 }, 00:24:24.750 { 00:24:24.750 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:24.750 "subtype": "NVMe", 00:24:24.750 "listen_addresses": [ 00:24:24.750 { 00:24:24.750 "transport": "RDMA", 00:24:24.750 "trtype": "RDMA", 00:24:24.750 "adrfam": "IPv4", 00:24:24.750 "traddr": "192.168.100.8", 00:24:24.750 "trsvcid": "4420" 00:24:24.750 } 00:24:24.750 ], 00:24:24.750 "allow_any_host": true, 00:24:24.750 "hosts": [], 00:24:24.750 "serial_number": "SPDK00000000000001", 00:24:24.750 "model_number": "SPDK bdev Controller", 00:24:24.750 "max_namespaces": 2, 00:24:24.750 "min_cntlid": 1, 00:24:24.750 "max_cntlid": 65519, 00:24:24.750 "namespaces": [ 00:24:24.750 { 00:24:24.750 "nsid": 1, 00:24:24.750 "bdev_name": "Malloc0", 00:24:24.750 "name": "Malloc0", 00:24:24.750 "nguid": "30CE2AC5E5264CAE8D04E312CC8C10FC", 00:24:24.750 "uuid": "30ce2ac5-e526-4cae-8d04-e312cc8c10fc" 00:24:24.750 } 00:24:24.750 ] 00:24:24.750 } 00:24:24.750 ] 00:24:24.750 16:14:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:24.750 16:14:55 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:24.750 16:14:55 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:24.750 16:14:55 -- host/aer.sh@33 -- # aerpid=1438859 00:24:24.750 16:14:55 -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:24.750 16:14:55 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:24.750 16:14:55 -- common/autotest_common.sh@1254 -- # local i=0 00:24:24.750 16:14:55 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:24.750 16:14:55 -- common/autotest_common.sh@1256 -- # '[' 0 -lt 200 ']' 00:24:24.750 16:14:55 -- common/autotest_common.sh@1257 -- # i=1 00:24:24.750 16:14:55 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:24:24.750 EAL: No free 2048 kB hugepages reported on node 1 00:24:24.750 16:14:55 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:24.750 16:14:55 -- common/autotest_common.sh@1256 -- # '[' 1 -lt 200 ']' 00:24:24.750 16:14:55 -- common/autotest_common.sh@1257 -- # i=2 00:24:24.750 16:14:55 -- common/autotest_common.sh@1258 -- # sleep 0.1 00:24:25.010 16:14:55 -- common/autotest_common.sh@1255 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:25.010 16:14:55 -- common/autotest_common.sh@1261 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:25.010 16:14:55 -- common/autotest_common.sh@1265 -- # return 0 00:24:25.010 16:14:55 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:25.010 16:14:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.010 16:14:55 -- common/autotest_common.sh@10 -- # set +x 00:24:25.010 Malloc1 00:24:25.010 16:14:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.010 16:14:55 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:25.010 16:14:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.010 16:14:55 -- common/autotest_common.sh@10 -- # set +x 00:24:25.010 16:14:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.010 16:14:55 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:25.010 16:14:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.010 16:14:55 -- common/autotest_common.sh@10 -- # set +x 00:24:25.010 [ 00:24:25.010 { 00:24:25.010 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:25.010 "subtype": "Discovery", 00:24:25.010 "listen_addresses": [], 00:24:25.010 "allow_any_host": true, 00:24:25.010 "hosts": [] 00:24:25.010 }, 00:24:25.010 { 00:24:25.010 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.010 "subtype": "NVMe", 00:24:25.010 "listen_addresses": [ 00:24:25.010 { 00:24:25.010 "transport": "RDMA", 00:24:25.010 "trtype": "RDMA", 00:24:25.010 "adrfam": "IPv4", 00:24:25.010 "traddr": "192.168.100.8", 00:24:25.010 "trsvcid": "4420" 00:24:25.010 } 00:24:25.010 ], 00:24:25.010 "allow_any_host": true, 00:24:25.010 "hosts": [], 00:24:25.010 "serial_number": "SPDK00000000000001", 00:24:25.010 "model_number": "SPDK bdev Controller", 00:24:25.010 "max_namespaces": 2, 00:24:25.010 "min_cntlid": 1, 00:24:25.010 "max_cntlid": 65519, 00:24:25.010 "namespaces": [ 00:24:25.010 { 00:24:25.010 "nsid": 1, 00:24:25.010 "bdev_name": "Malloc0", 00:24:25.010 "name": "Malloc0", 00:24:25.010 "nguid": "30CE2AC5E5264CAE8D04E312CC8C10FC", 00:24:25.010 "uuid": "30ce2ac5-e526-4cae-8d04-e312cc8c10fc" 00:24:25.010 }, 00:24:25.010 { 00:24:25.010 "nsid": 2, 00:24:25.010 "bdev_name": "Malloc1", 00:24:25.010 "name": "Malloc1", 00:24:25.010 "nguid": "23688B60FAC54D81A9398C99ACAAA4C8", 00:24:25.010 "uuid": "23688b60-fac5-4d81-a939-8c99acaaa4c8" 00:24:25.010 } 00:24:25.010 ] 00:24:25.010 } 00:24:25.010 ] 00:24:25.010 16:14:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.010 16:14:55 -- host/aer.sh@43 -- # wait 1438859 00:24:25.010 Asynchronous Event Request test 00:24:25.010 Attaching to 192.168.100.8 00:24:25.010 Attached to 192.168.100.8 00:24:25.010 Registering asynchronous event callbacks... 00:24:25.010 Starting namespace attribute notice tests for all controllers... 00:24:25.010 192.168.100.8: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:25.010 aer_cb - Changed Namespace 00:24:25.010 Cleaning up... 
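Condensed, the aer.sh flow recorded above is a handful of RPCs plus the standalone aer tool; rpc_cmd in the harness corresponds roughly to invoking scripts/rpc.py against the target's /var/tmp/spdk.sock. A sketch using the same names and addresses as this run, where the hot-added second namespace is what fires the namespace-attribute-changed notice the tool waits for:

    scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    # start the AER listener, then hot-add a second namespace to trigger the event
    test/nvme/aer/aer -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
    scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2

The cleanup RPCs that follow (bdev_malloc_delete for both bdevs and nvmf_delete_subsystem) tear the subsystem back down before the next test starts.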
00:24:25.010 16:14:55 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:25.010 16:14:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.010 16:14:55 -- common/autotest_common.sh@10 -- # set +x 00:24:25.010 16:14:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.010 16:14:55 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:25.010 16:14:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.010 16:14:55 -- common/autotest_common.sh@10 -- # set +x 00:24:25.010 16:14:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.010 16:14:55 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:25.010 16:14:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.010 16:14:55 -- common/autotest_common.sh@10 -- # set +x 00:24:25.010 16:14:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.010 16:14:55 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:25.010 16:14:55 -- host/aer.sh@51 -- # nvmftestfini 00:24:25.010 16:14:55 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:25.010 16:14:55 -- nvmf/common.sh@116 -- # sync 00:24:25.010 16:14:55 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:25.011 16:14:55 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:25.011 16:14:55 -- nvmf/common.sh@119 -- # set +e 00:24:25.011 16:14:55 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:25.011 16:14:55 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:25.011 rmmod nvme_rdma 00:24:25.270 rmmod nvme_fabrics 00:24:25.270 16:14:55 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:25.270 16:14:55 -- nvmf/common.sh@123 -- # set -e 00:24:25.270 16:14:55 -- nvmf/common.sh@124 -- # return 0 00:24:25.270 16:14:55 -- nvmf/common.sh@477 -- # '[' -n 1438586 ']' 00:24:25.270 16:14:55 -- nvmf/common.sh@478 -- # killprocess 1438586 00:24:25.270 16:14:55 -- common/autotest_common.sh@936 -- # '[' -z 1438586 ']' 00:24:25.270 16:14:55 -- common/autotest_common.sh@940 -- # kill -0 1438586 00:24:25.270 16:14:55 -- common/autotest_common.sh@941 -- # uname 00:24:25.270 16:14:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:25.270 16:14:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1438586 00:24:25.270 16:14:55 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:25.270 16:14:55 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:25.271 16:14:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1438586' 00:24:25.271 killing process with pid 1438586 00:24:25.271 16:14:55 -- common/autotest_common.sh@955 -- # kill 1438586 00:24:25.271 [2024-11-20 16:14:55.913299] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:24:25.271 16:14:55 -- common/autotest_common.sh@960 -- # wait 1438586 00:24:25.530 16:14:56 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:25.530 16:14:56 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:25.530 00:24:25.530 real 0m8.751s 00:24:25.530 user 0m8.711s 00:24:25.530 sys 0m5.660s 00:24:25.530 16:14:56 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:25.530 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:24:25.530 ************************************ 00:24:25.530 END TEST nvmf_aer 00:24:25.530 ************************************ 00:24:25.530 16:14:56 -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:24:25.530 16:14:56 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:25.530 16:14:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:25.530 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:24:25.530 ************************************ 00:24:25.530 START TEST nvmf_async_init 00:24:25.530 ************************************ 00:24:25.530 16:14:56 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=rdma 00:24:25.530 * Looking for test storage... 00:24:25.530 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:25.530 16:14:56 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:25.530 16:14:56 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:25.530 16:14:56 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:25.790 16:14:56 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:25.790 16:14:56 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:25.790 16:14:56 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:25.790 16:14:56 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:25.790 16:14:56 -- scripts/common.sh@335 -- # IFS=.-: 00:24:25.790 16:14:56 -- scripts/common.sh@335 -- # read -ra ver1 00:24:25.790 16:14:56 -- scripts/common.sh@336 -- # IFS=.-: 00:24:25.790 16:14:56 -- scripts/common.sh@336 -- # read -ra ver2 00:24:25.790 16:14:56 -- scripts/common.sh@337 -- # local 'op=<' 00:24:25.790 16:14:56 -- scripts/common.sh@339 -- # ver1_l=2 00:24:25.790 16:14:56 -- scripts/common.sh@340 -- # ver2_l=1 00:24:25.790 16:14:56 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:25.790 16:14:56 -- scripts/common.sh@343 -- # case "$op" in 00:24:25.790 16:14:56 -- scripts/common.sh@344 -- # : 1 00:24:25.790 16:14:56 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:25.791 16:14:56 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:25.791 16:14:56 -- scripts/common.sh@364 -- # decimal 1 00:24:25.791 16:14:56 -- scripts/common.sh@352 -- # local d=1 00:24:25.791 16:14:56 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:25.791 16:14:56 -- scripts/common.sh@354 -- # echo 1 00:24:25.791 16:14:56 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:25.791 16:14:56 -- scripts/common.sh@365 -- # decimal 2 00:24:25.791 16:14:56 -- scripts/common.sh@352 -- # local d=2 00:24:25.791 16:14:56 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:25.791 16:14:56 -- scripts/common.sh@354 -- # echo 2 00:24:25.791 16:14:56 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:25.791 16:14:56 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:25.791 16:14:56 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:25.791 16:14:56 -- scripts/common.sh@367 -- # return 0 00:24:25.791 16:14:56 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:25.791 16:14:56 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:25.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.791 --rc genhtml_branch_coverage=1 00:24:25.791 --rc genhtml_function_coverage=1 00:24:25.791 --rc genhtml_legend=1 00:24:25.791 --rc geninfo_all_blocks=1 00:24:25.791 --rc geninfo_unexecuted_blocks=1 00:24:25.791 00:24:25.791 ' 00:24:25.791 16:14:56 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:25.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.791 --rc genhtml_branch_coverage=1 00:24:25.791 --rc genhtml_function_coverage=1 00:24:25.791 --rc genhtml_legend=1 00:24:25.791 --rc geninfo_all_blocks=1 00:24:25.791 --rc geninfo_unexecuted_blocks=1 00:24:25.791 00:24:25.791 ' 00:24:25.791 16:14:56 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:25.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.791 --rc genhtml_branch_coverage=1 00:24:25.791 --rc genhtml_function_coverage=1 00:24:25.791 --rc genhtml_legend=1 00:24:25.791 --rc geninfo_all_blocks=1 00:24:25.791 --rc geninfo_unexecuted_blocks=1 00:24:25.791 00:24:25.791 ' 00:24:25.791 16:14:56 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:25.791 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.791 --rc genhtml_branch_coverage=1 00:24:25.791 --rc genhtml_function_coverage=1 00:24:25.791 --rc genhtml_legend=1 00:24:25.791 --rc geninfo_all_blocks=1 00:24:25.791 --rc geninfo_unexecuted_blocks=1 00:24:25.791 00:24:25.791 ' 00:24:25.791 16:14:56 -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.791 16:14:56 -- nvmf/common.sh@7 -- # uname -s 00:24:25.791 16:14:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.791 16:14:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.791 16:14:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.791 16:14:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.791 16:14:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.791 16:14:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.791 16:14:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.791 16:14:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.791 16:14:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.791 16:14:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.791 16:14:56 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:24:25.791 16:14:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:25.791 16:14:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.791 16:14:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.791 16:14:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.791 16:14:56 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:25.791 16:14:56 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.791 16:14:56 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.791 16:14:56 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.791 16:14:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.791 16:14:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.791 16:14:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.791 16:14:56 -- paths/export.sh@5 -- # export PATH 00:24:25.791 16:14:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.791 16:14:56 -- nvmf/common.sh@46 -- # : 0 00:24:25.791 16:14:56 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:25.791 16:14:56 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:25.791 16:14:56 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:25.791 16:14:56 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.791 16:14:56 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.791 16:14:56 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:25.791 16:14:56 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:25.791 16:14:56 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:25.791 16:14:56 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:25.791 16:14:56 -- host/async_init.sh@14 -- # null_block_size=512 00:24:25.791 16:14:56 -- host/async_init.sh@15 -- # null_bdev=null0 00:24:25.791 16:14:56 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:25.791 16:14:56 -- host/async_init.sh@20 -- # uuidgen 00:24:25.791 16:14:56 -- host/async_init.sh@20 -- # tr -d - 00:24:25.791 16:14:56 -- host/async_init.sh@20 -- # nguid=2cec23a54f924b57a998b810f9d7ec75 00:24:25.791 16:14:56 -- host/async_init.sh@22 -- # nvmftestinit 00:24:25.791 16:14:56 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:25.791 16:14:56 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.791 16:14:56 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:25.791 16:14:56 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:25.791 16:14:56 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:25.791 16:14:56 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.791 16:14:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:25.791 16:14:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.791 16:14:56 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:25.791 16:14:56 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:25.791 16:14:56 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:25.791 16:14:56 -- common/autotest_common.sh@10 -- # set +x 00:24:32.365 16:15:02 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:32.365 16:15:02 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:32.365 16:15:02 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:32.365 16:15:02 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:32.365 16:15:02 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:32.365 16:15:02 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:32.365 16:15:02 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:32.365 16:15:02 -- nvmf/common.sh@294 -- # net_devs=() 00:24:32.365 16:15:02 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:32.365 16:15:02 -- nvmf/common.sh@295 -- # e810=() 00:24:32.365 16:15:02 -- nvmf/common.sh@295 -- # local -ga e810 00:24:32.365 16:15:02 -- nvmf/common.sh@296 -- # x722=() 00:24:32.365 16:15:02 -- nvmf/common.sh@296 -- # local -ga x722 00:24:32.365 16:15:02 -- nvmf/common.sh@297 -- # mlx=() 00:24:32.365 16:15:02 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:32.365 16:15:02 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.365 16:15:02 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.365 16:15:02 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.365 16:15:02 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.365 16:15:02 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.365 16:15:02 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.365 16:15:02 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.365 16:15:02 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.365 16:15:02 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.365 16:15:02 -- nvmf/common.sh@316 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.365 16:15:02 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.365 16:15:02 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:32.365 16:15:02 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:32.365 16:15:02 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:24:32.365 16:15:02 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:32.365 16:15:02 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:32.365 16:15:02 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:32.365 16:15:02 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:32.365 16:15:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:32.365 16:15:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:32.365 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:32.365 16:15:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:32.365 16:15:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:32.365 16:15:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:32.365 16:15:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:32.365 16:15:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:32.365 16:15:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:32.365 16:15:02 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:32.365 16:15:02 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:32.365 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:32.365 16:15:02 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:32.365 16:15:02 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:32.365 16:15:02 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:32.366 16:15:02 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:32.366 16:15:02 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:32.366 16:15:02 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:32.366 16:15:02 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:32.366 16:15:02 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:32.366 16:15:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:32.366 16:15:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.366 16:15:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:32.366 16:15:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.366 16:15:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:32.366 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:32.366 16:15:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.366 16:15:02 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:32.366 16:15:02 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.366 16:15:02 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:32.366 16:15:02 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.366 16:15:02 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:32.366 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:32.366 16:15:02 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.366 16:15:02 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:32.366 16:15:02 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:32.366 16:15:02 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:32.366 16:15:02 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:32.366 16:15:02 -- nvmf/common.sh@407 -- 
# [[ rdma == rdma ]] 00:24:32.366 16:15:02 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:32.366 16:15:02 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:32.366 16:15:02 -- nvmf/common.sh@57 -- # uname 00:24:32.366 16:15:02 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:32.366 16:15:02 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:24:32.366 16:15:02 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:32.366 16:15:02 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:32.366 16:15:02 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:32.366 16:15:02 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:32.366 16:15:02 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:32.366 16:15:02 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:32.366 16:15:02 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:32.366 16:15:02 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:32.366 16:15:02 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:32.366 16:15:02 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:32.366 16:15:02 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:32.366 16:15:02 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:32.366 16:15:02 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:32.366 16:15:02 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:32.366 16:15:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:32.366 16:15:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:32.366 16:15:02 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:32.366 16:15:02 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:32.366 16:15:02 -- nvmf/common.sh@104 -- # continue 2 00:24:32.366 16:15:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:32.366 16:15:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:32.366 16:15:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:32.366 16:15:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:32.366 16:15:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:32.366 16:15:02 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:32.366 16:15:02 -- nvmf/common.sh@104 -- # continue 2 00:24:32.366 16:15:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:32.366 16:15:02 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:32.366 16:15:02 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:32.366 16:15:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:32.366 16:15:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:32.366 16:15:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:32.366 16:15:02 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:32.366 16:15:02 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:32.366 16:15:02 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:32.366 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:32.366 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:32.366 altname enp217s0f0np0 00:24:32.366 altname ens818f0np0 00:24:32.366 inet 192.168.100.8/24 scope global mlx_0_0 00:24:32.366 valid_lft forever preferred_lft forever 00:24:32.366 16:15:02 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:32.366 16:15:02 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:32.366 16:15:02 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:32.366 16:15:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:32.366 16:15:02 -- 
nvmf/common.sh@112 -- # awk '{print $4}' 00:24:32.366 16:15:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:32.366 16:15:02 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:32.366 16:15:02 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:32.366 16:15:02 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:32.366 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:32.366 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:32.366 altname enp217s0f1np1 00:24:32.366 altname ens818f1np1 00:24:32.366 inet 192.168.100.9/24 scope global mlx_0_1 00:24:32.366 valid_lft forever preferred_lft forever 00:24:32.366 16:15:02 -- nvmf/common.sh@410 -- # return 0 00:24:32.366 16:15:02 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:32.366 16:15:02 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:32.366 16:15:02 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:32.366 16:15:02 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:32.366 16:15:02 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:32.366 16:15:02 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:32.366 16:15:02 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:32.366 16:15:02 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:32.366 16:15:02 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:32.366 16:15:02 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:32.366 16:15:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:32.366 16:15:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:32.366 16:15:02 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:32.366 16:15:02 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:32.366 16:15:02 -- nvmf/common.sh@104 -- # continue 2 00:24:32.366 16:15:02 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:32.366 16:15:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:32.366 16:15:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:32.366 16:15:02 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:32.366 16:15:02 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:32.366 16:15:02 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:32.366 16:15:02 -- nvmf/common.sh@104 -- # continue 2 00:24:32.366 16:15:02 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:32.366 16:15:02 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:32.366 16:15:02 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:32.366 16:15:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:32.367 16:15:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:32.367 16:15:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:32.367 16:15:02 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:32.367 16:15:02 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:32.367 16:15:02 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:32.367 16:15:02 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:32.367 16:15:02 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:32.367 16:15:02 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:32.367 16:15:02 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:32.367 192.168.100.9' 00:24:32.367 16:15:02 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:32.367 192.168.100.9' 00:24:32.367 16:15:02 -- nvmf/common.sh@445 -- # head -n 1 00:24:32.367 16:15:02 -- nvmf/common.sh@445 -- # 
NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:32.367 16:15:02 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:32.367 192.168.100.9' 00:24:32.367 16:15:02 -- nvmf/common.sh@446 -- # tail -n +2 00:24:32.367 16:15:02 -- nvmf/common.sh@446 -- # head -n 1 00:24:32.367 16:15:02 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:32.367 16:15:02 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:32.367 16:15:02 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:32.367 16:15:02 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:32.367 16:15:02 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:32.367 16:15:02 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:32.367 16:15:02 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:32.367 16:15:02 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:32.367 16:15:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:32.367 16:15:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.367 16:15:02 -- nvmf/common.sh@469 -- # nvmfpid=1442427 00:24:32.367 16:15:02 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:32.367 16:15:02 -- nvmf/common.sh@470 -- # waitforlisten 1442427 00:24:32.367 16:15:02 -- common/autotest_common.sh@829 -- # '[' -z 1442427 ']' 00:24:32.367 16:15:02 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.367 16:15:02 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:32.367 16:15:02 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:32.367 16:15:02 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:32.367 16:15:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.367 [2024-11-20 16:15:02.819611] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:32.367 [2024-11-20 16:15:02.819664] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:32.367 EAL: No free 2048 kB hugepages reported on node 1 00:24:32.367 [2024-11-20 16:15:02.889652] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:32.367 [2024-11-20 16:15:02.925715] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:32.367 [2024-11-20 16:15:02.925830] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:32.367 [2024-11-20 16:15:02.925841] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:32.367 [2024-11-20 16:15:02.925850] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
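The same module-load and address-discovery pass repeats for async_init.sh, this time with the target pinned to a single core (-m 0x1). The test-specific preliminaries earlier in this block are identifier generation; a short sketch using the values recorded in this run:

    NVME_HOSTNQN=$(nvme gen-hostnqn)   # here nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e
    nguid=$(uuidgen | tr -d -)         # dashes stripped: 2cec23a54f924b57a998b810f9d7ec75, later passed to nvmf_subsystem_add_ns -g
    # null0 is a 1024 MiB null bdev with 512-byte blocks, i.e. the 2097152-block nvme0n1 reported below
    scripts/rpc.py bdev_null_create null0 1024 512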
00:24:32.367 [2024-11-20 16:15:02.925873] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:32.935 16:15:03 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:32.935 16:15:03 -- common/autotest_common.sh@862 -- # return 0 00:24:32.935 16:15:03 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:32.935 16:15:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:32.935 16:15:03 -- common/autotest_common.sh@10 -- # set +x 00:24:32.935 16:15:03 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.935 16:15:03 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:32.935 16:15:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:32.935 16:15:03 -- common/autotest_common.sh@10 -- # set +x 00:24:32.935 [2024-11-20 16:15:03.712193] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1b4c230/0x1b506e0) succeed. 00:24:32.935 [2024-11-20 16:15:03.721078] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1b4d6e0/0x1b91d80) succeed. 00:24:33.194 16:15:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.194 16:15:03 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:33.194 16:15:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.194 16:15:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.194 null0 00:24:33.194 16:15:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.194 16:15:03 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:33.194 16:15:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.194 16:15:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.194 16:15:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.194 16:15:03 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:33.194 16:15:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.194 16:15:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.194 16:15:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.194 16:15:03 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 2cec23a54f924b57a998b810f9d7ec75 00:24:33.194 16:15:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.194 16:15:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.194 16:15:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.194 16:15:03 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:33.194 16:15:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.194 16:15:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.194 [2024-11-20 16:15:03.805363] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:33.194 16:15:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.194 16:15:03 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:33.194 16:15:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.194 16:15:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.194 nvme0n1 00:24:33.194 16:15:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.194 16:15:03 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:33.194 16:15:03 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.194 16:15:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.194 [ 00:24:33.194 { 00:24:33.194 "name": "nvme0n1", 00:24:33.194 "aliases": [ 00:24:33.194 "2cec23a5-4f92-4b57-a998-b810f9d7ec75" 00:24:33.194 ], 00:24:33.194 "product_name": "NVMe disk", 00:24:33.194 "block_size": 512, 00:24:33.194 "num_blocks": 2097152, 00:24:33.194 "uuid": "2cec23a5-4f92-4b57-a998-b810f9d7ec75", 00:24:33.194 "assigned_rate_limits": { 00:24:33.194 "rw_ios_per_sec": 0, 00:24:33.194 "rw_mbytes_per_sec": 0, 00:24:33.194 "r_mbytes_per_sec": 0, 00:24:33.194 "w_mbytes_per_sec": 0 00:24:33.194 }, 00:24:33.194 "claimed": false, 00:24:33.194 "zoned": false, 00:24:33.194 "supported_io_types": { 00:24:33.194 "read": true, 00:24:33.194 "write": true, 00:24:33.194 "unmap": false, 00:24:33.194 "write_zeroes": true, 00:24:33.194 "flush": true, 00:24:33.194 "reset": true, 00:24:33.194 "compare": true, 00:24:33.194 "compare_and_write": true, 00:24:33.194 "abort": true, 00:24:33.194 "nvme_admin": true, 00:24:33.194 "nvme_io": true 00:24:33.194 }, 00:24:33.194 "memory_domains": [ 00:24:33.194 { 00:24:33.194 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:33.194 "dma_device_type": 0 00:24:33.194 } 00:24:33.194 ], 00:24:33.194 "driver_specific": { 00:24:33.194 "nvme": [ 00:24:33.194 { 00:24:33.194 "trid": { 00:24:33.194 "trtype": "RDMA", 00:24:33.194 "adrfam": "IPv4", 00:24:33.194 "traddr": "192.168.100.8", 00:24:33.194 "trsvcid": "4420", 00:24:33.194 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:33.194 }, 00:24:33.194 "ctrlr_data": { 00:24:33.194 "cntlid": 1, 00:24:33.194 "vendor_id": "0x8086", 00:24:33.194 "model_number": "SPDK bdev Controller", 00:24:33.194 "serial_number": "00000000000000000000", 00:24:33.194 "firmware_revision": "24.01.1", 00:24:33.194 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:33.194 "oacs": { 00:24:33.194 "security": 0, 00:24:33.194 "format": 0, 00:24:33.194 "firmware": 0, 00:24:33.194 "ns_manage": 0 00:24:33.194 }, 00:24:33.195 "multi_ctrlr": true, 00:24:33.195 "ana_reporting": false 00:24:33.195 }, 00:24:33.195 "vs": { 00:24:33.195 "nvme_version": "1.3" 00:24:33.195 }, 00:24:33.195 "ns_data": { 00:24:33.195 "id": 1, 00:24:33.195 "can_share": true 00:24:33.195 } 00:24:33.195 } 00:24:33.195 ], 00:24:33.195 "mp_policy": "active_passive" 00:24:33.195 } 00:24:33.195 } 00:24:33.195 ] 00:24:33.195 16:15:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.195 16:15:03 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:24:33.195 16:15:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.195 16:15:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.195 [2024-11-20 16:15:03.920191] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:33.195 [2024-11-20 16:15:03.938109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:24:33.195 [2024-11-20 16:15:03.959831] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
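The reset exercise just logged is three RPCs end to end; the detail worth noting is that ctrlr_data.cntlid reads 1 before the reset and 2 in the dump that follows, because the disconnect/reconnect lands the host on a freshly allocated controller in the same subsystem. A rough equivalent, again assuming scripts/rpc.py as the client:

    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
    scripts/rpc.py bdev_get_bdevs -b nvme0n1        # ctrlr_data.cntlid == 1
    scripts/rpc.py bdev_nvme_reset_controller nvme0 # the transient CQ error above is the expected teardown
    scripts/rpc.py bdev_get_bdevs -b nvme0n1        # ctrlr_data.cntlid == 2 after the reconnect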
00:24:33.195 16:15:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.195 16:15:03 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:33.195 16:15:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.195 16:15:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.195 [ 00:24:33.195 { 00:24:33.195 "name": "nvme0n1", 00:24:33.195 "aliases": [ 00:24:33.195 "2cec23a5-4f92-4b57-a998-b810f9d7ec75" 00:24:33.195 ], 00:24:33.195 "product_name": "NVMe disk", 00:24:33.195 "block_size": 512, 00:24:33.195 "num_blocks": 2097152, 00:24:33.195 "uuid": "2cec23a5-4f92-4b57-a998-b810f9d7ec75", 00:24:33.195 "assigned_rate_limits": { 00:24:33.195 "rw_ios_per_sec": 0, 00:24:33.195 "rw_mbytes_per_sec": 0, 00:24:33.195 "r_mbytes_per_sec": 0, 00:24:33.195 "w_mbytes_per_sec": 0 00:24:33.195 }, 00:24:33.195 "claimed": false, 00:24:33.195 "zoned": false, 00:24:33.195 "supported_io_types": { 00:24:33.195 "read": true, 00:24:33.195 "write": true, 00:24:33.195 "unmap": false, 00:24:33.195 "write_zeroes": true, 00:24:33.195 "flush": true, 00:24:33.195 "reset": true, 00:24:33.195 "compare": true, 00:24:33.195 "compare_and_write": true, 00:24:33.195 "abort": true, 00:24:33.195 "nvme_admin": true, 00:24:33.195 "nvme_io": true 00:24:33.195 }, 00:24:33.195 "memory_domains": [ 00:24:33.195 { 00:24:33.195 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:33.195 "dma_device_type": 0 00:24:33.195 } 00:24:33.195 ], 00:24:33.195 "driver_specific": { 00:24:33.195 "nvme": [ 00:24:33.195 { 00:24:33.195 "trid": { 00:24:33.195 "trtype": "RDMA", 00:24:33.195 "adrfam": "IPv4", 00:24:33.195 "traddr": "192.168.100.8", 00:24:33.195 "trsvcid": "4420", 00:24:33.195 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:33.195 }, 00:24:33.195 "ctrlr_data": { 00:24:33.195 "cntlid": 2, 00:24:33.195 "vendor_id": "0x8086", 00:24:33.195 "model_number": "SPDK bdev Controller", 00:24:33.195 "serial_number": "00000000000000000000", 00:24:33.195 "firmware_revision": "24.01.1", 00:24:33.195 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:33.195 "oacs": { 00:24:33.195 "security": 0, 00:24:33.195 "format": 0, 00:24:33.195 "firmware": 0, 00:24:33.195 "ns_manage": 0 00:24:33.195 }, 00:24:33.195 "multi_ctrlr": true, 00:24:33.195 "ana_reporting": false 00:24:33.195 }, 00:24:33.195 "vs": { 00:24:33.195 "nvme_version": "1.3" 00:24:33.195 }, 00:24:33.195 "ns_data": { 00:24:33.195 "id": 1, 00:24:33.195 "can_share": true 00:24:33.195 } 00:24:33.195 } 00:24:33.195 ], 00:24:33.195 "mp_policy": "active_passive" 00:24:33.195 } 00:24:33.195 } 00:24:33.195 ] 00:24:33.195 16:15:03 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.195 16:15:03 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.195 16:15:03 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.195 16:15:03 -- common/autotest_common.sh@10 -- # set +x 00:24:33.455 16:15:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.455 16:15:04 -- host/async_init.sh@53 -- # mktemp 00:24:33.455 16:15:04 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.VKoYiQjMWU 00:24:33.455 16:15:04 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:33.455 16:15:04 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.VKoYiQjMWU 00:24:33.455 16:15:04 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:33.455 16:15:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.455 16:15:04 -- common/autotest_common.sh@10 -- # set +x 
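The last leg of async_init exercises the TLS path that the target itself flags as experimental: the PSK interchange string written to the temp file (key_path, /tmp/tmp.VKoYiQjMWU in this run) has to be presented on both the subsystem host entry and the initiator-side attach. The key-file creation just above and the commands that follow condense to roughly this, assuming scripts/rpc.py as the client:

    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
    chmod 0600 "$key_path"
    scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

The detach, key removal, and target teardown recorded afterwards close out the test.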
00:24:33.455 16:15:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.455 16:15:04 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4421 --secure-channel 00:24:33.455 16:15:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.455 16:15:04 -- common/autotest_common.sh@10 -- # set +x 00:24:33.455 [2024-11-20 16:15:04.034956] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:24:33.455 16:15:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.455 16:15:04 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VKoYiQjMWU 00:24:33.455 16:15:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.455 16:15:04 -- common/autotest_common.sh@10 -- # set +x 00:24:33.455 16:15:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.455 16:15:04 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t rdma -a 192.168.100.8 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VKoYiQjMWU 00:24:33.455 16:15:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.455 16:15:04 -- common/autotest_common.sh@10 -- # set +x 00:24:33.455 [2024-11-20 16:15:04.050981] bdev_nvme_rpc.c: 477:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:33.455 nvme0n1 00:24:33.455 16:15:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.455 16:15:04 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:33.455 16:15:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.455 16:15:04 -- common/autotest_common.sh@10 -- # set +x 00:24:33.455 [ 00:24:33.455 { 00:24:33.455 "name": "nvme0n1", 00:24:33.455 "aliases": [ 00:24:33.455 "2cec23a5-4f92-4b57-a998-b810f9d7ec75" 00:24:33.455 ], 00:24:33.455 "product_name": "NVMe disk", 00:24:33.455 "block_size": 512, 00:24:33.455 "num_blocks": 2097152, 00:24:33.455 "uuid": "2cec23a5-4f92-4b57-a998-b810f9d7ec75", 00:24:33.455 "assigned_rate_limits": { 00:24:33.455 "rw_ios_per_sec": 0, 00:24:33.455 "rw_mbytes_per_sec": 0, 00:24:33.455 "r_mbytes_per_sec": 0, 00:24:33.455 "w_mbytes_per_sec": 0 00:24:33.455 }, 00:24:33.455 "claimed": false, 00:24:33.455 "zoned": false, 00:24:33.455 "supported_io_types": { 00:24:33.455 "read": true, 00:24:33.455 "write": true, 00:24:33.455 "unmap": false, 00:24:33.455 "write_zeroes": true, 00:24:33.455 "flush": true, 00:24:33.455 "reset": true, 00:24:33.455 "compare": true, 00:24:33.455 "compare_and_write": true, 00:24:33.455 "abort": true, 00:24:33.455 "nvme_admin": true, 00:24:33.455 "nvme_io": true 00:24:33.455 }, 00:24:33.455 "memory_domains": [ 00:24:33.455 { 00:24:33.455 "dma_device_id": "SPDK_RDMA_DMA_DEVICE", 00:24:33.455 "dma_device_type": 0 00:24:33.455 } 00:24:33.455 ], 00:24:33.455 "driver_specific": { 00:24:33.455 "nvme": [ 00:24:33.455 { 00:24:33.455 "trid": { 00:24:33.455 "trtype": "RDMA", 00:24:33.455 "adrfam": "IPv4", 00:24:33.455 "traddr": "192.168.100.8", 00:24:33.455 "trsvcid": "4421", 00:24:33.455 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:33.455 }, 00:24:33.455 "ctrlr_data": { 00:24:33.455 "cntlid": 3, 00:24:33.455 "vendor_id": "0x8086", 00:24:33.455 "model_number": "SPDK bdev Controller", 00:24:33.455 "serial_number": "00000000000000000000", 00:24:33.455 "firmware_revision": "24.01.1", 00:24:33.455 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:33.455 
"oacs": { 00:24:33.455 "security": 0, 00:24:33.455 "format": 0, 00:24:33.455 "firmware": 0, 00:24:33.455 "ns_manage": 0 00:24:33.455 }, 00:24:33.455 "multi_ctrlr": true, 00:24:33.455 "ana_reporting": false 00:24:33.455 }, 00:24:33.455 "vs": { 00:24:33.455 "nvme_version": "1.3" 00:24:33.455 }, 00:24:33.455 "ns_data": { 00:24:33.455 "id": 1, 00:24:33.455 "can_share": true 00:24:33.455 } 00:24:33.455 } 00:24:33.455 ], 00:24:33.455 "mp_policy": "active_passive" 00:24:33.455 } 00:24:33.455 } 00:24:33.455 ] 00:24:33.455 16:15:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.455 16:15:04 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:33.455 16:15:04 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.455 16:15:04 -- common/autotest_common.sh@10 -- # set +x 00:24:33.455 16:15:04 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.455 16:15:04 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.VKoYiQjMWU 00:24:33.455 16:15:04 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:33.455 16:15:04 -- host/async_init.sh@78 -- # nvmftestfini 00:24:33.455 16:15:04 -- nvmf/common.sh@476 -- # nvmfcleanup 00:24:33.455 16:15:04 -- nvmf/common.sh@116 -- # sync 00:24:33.455 16:15:04 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:24:33.455 16:15:04 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:24:33.455 16:15:04 -- nvmf/common.sh@119 -- # set +e 00:24:33.455 16:15:04 -- nvmf/common.sh@120 -- # for i in {1..20} 00:24:33.455 16:15:04 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:24:33.455 rmmod nvme_rdma 00:24:33.455 rmmod nvme_fabrics 00:24:33.455 16:15:04 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:24:33.455 16:15:04 -- nvmf/common.sh@123 -- # set -e 00:24:33.455 16:15:04 -- nvmf/common.sh@124 -- # return 0 00:24:33.455 16:15:04 -- nvmf/common.sh@477 -- # '[' -n 1442427 ']' 00:24:33.455 16:15:04 -- nvmf/common.sh@478 -- # killprocess 1442427 00:24:33.455 16:15:04 -- common/autotest_common.sh@936 -- # '[' -z 1442427 ']' 00:24:33.455 16:15:04 -- common/autotest_common.sh@940 -- # kill -0 1442427 00:24:33.455 16:15:04 -- common/autotest_common.sh@941 -- # uname 00:24:33.456 16:15:04 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:33.456 16:15:04 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1442427 00:24:33.714 16:15:04 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:33.714 16:15:04 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:33.714 16:15:04 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1442427' 00:24:33.714 killing process with pid 1442427 00:24:33.714 16:15:04 -- common/autotest_common.sh@955 -- # kill 1442427 00:24:33.714 16:15:04 -- common/autotest_common.sh@960 -- # wait 1442427 00:24:33.715 16:15:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:24:33.715 16:15:04 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:24:33.715 00:24:33.715 real 0m8.281s 00:24:33.715 user 0m3.667s 00:24:33.715 sys 0m5.345s 00:24:33.715 16:15:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:24:33.715 16:15:04 -- common/autotest_common.sh@10 -- # set +x 00:24:33.715 ************************************ 00:24:33.715 END TEST nvmf_async_init 00:24:33.715 ************************************ 00:24:33.974 16:15:04 -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:24:33.974 16:15:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:33.974 
16:15:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:33.974 16:15:04 -- common/autotest_common.sh@10 -- # set +x 00:24:33.974 ************************************ 00:24:33.974 START TEST dma 00:24:33.974 ************************************ 00:24:33.974 16:15:04 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=rdma 00:24:33.974 * Looking for test storage... 00:24:33.974 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:24:33.974 16:15:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:24:33.974 16:15:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:24:33.974 16:15:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:24:33.974 16:15:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:24:33.974 16:15:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:24:33.974 16:15:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:24:33.974 16:15:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:24:33.974 16:15:04 -- scripts/common.sh@335 -- # IFS=.-: 00:24:33.975 16:15:04 -- scripts/common.sh@335 -- # read -ra ver1 00:24:33.975 16:15:04 -- scripts/common.sh@336 -- # IFS=.-: 00:24:33.975 16:15:04 -- scripts/common.sh@336 -- # read -ra ver2 00:24:33.975 16:15:04 -- scripts/common.sh@337 -- # local 'op=<' 00:24:33.975 16:15:04 -- scripts/common.sh@339 -- # ver1_l=2 00:24:33.975 16:15:04 -- scripts/common.sh@340 -- # ver2_l=1 00:24:33.975 16:15:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:24:33.975 16:15:04 -- scripts/common.sh@343 -- # case "$op" in 00:24:33.975 16:15:04 -- scripts/common.sh@344 -- # : 1 00:24:33.975 16:15:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:24:33.975 16:15:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:33.975 16:15:04 -- scripts/common.sh@364 -- # decimal 1 00:24:33.975 16:15:04 -- scripts/common.sh@352 -- # local d=1 00:24:33.975 16:15:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:33.975 16:15:04 -- scripts/common.sh@354 -- # echo 1 00:24:33.975 16:15:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:24:33.975 16:15:04 -- scripts/common.sh@365 -- # decimal 2 00:24:33.975 16:15:04 -- scripts/common.sh@352 -- # local d=2 00:24:33.975 16:15:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:33.975 16:15:04 -- scripts/common.sh@354 -- # echo 2 00:24:33.975 16:15:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:24:33.975 16:15:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:24:33.975 16:15:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:24:33.975 16:15:04 -- scripts/common.sh@367 -- # return 0 00:24:33.975 16:15:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:33.975 16:15:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:24:33.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.975 --rc genhtml_branch_coverage=1 00:24:33.975 --rc genhtml_function_coverage=1 00:24:33.975 --rc genhtml_legend=1 00:24:33.975 --rc geninfo_all_blocks=1 00:24:33.975 --rc geninfo_unexecuted_blocks=1 00:24:33.975 00:24:33.975 ' 00:24:33.975 16:15:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:24:33.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.975 --rc genhtml_branch_coverage=1 00:24:33.975 --rc genhtml_function_coverage=1 00:24:33.975 --rc genhtml_legend=1 00:24:33.975 --rc geninfo_all_blocks=1 00:24:33.975 --rc geninfo_unexecuted_blocks=1 00:24:33.975 00:24:33.975 ' 00:24:33.975 16:15:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:24:33.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.975 --rc genhtml_branch_coverage=1 00:24:33.975 --rc genhtml_function_coverage=1 00:24:33.975 --rc genhtml_legend=1 00:24:33.975 --rc geninfo_all_blocks=1 00:24:33.975 --rc geninfo_unexecuted_blocks=1 00:24:33.975 00:24:33.975 ' 00:24:33.975 16:15:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:24:33.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.975 --rc genhtml_branch_coverage=1 00:24:33.975 --rc genhtml_function_coverage=1 00:24:33.975 --rc genhtml_legend=1 00:24:33.975 --rc geninfo_all_blocks=1 00:24:33.975 --rc geninfo_unexecuted_blocks=1 00:24:33.975 00:24:33.975 ' 00:24:33.975 16:15:04 -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:24:33.975 16:15:04 -- nvmf/common.sh@7 -- # uname -s 00:24:33.975 16:15:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:33.975 16:15:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:33.975 16:15:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:33.975 16:15:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:33.975 16:15:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:33.975 16:15:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:33.975 16:15:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:33.975 16:15:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:33.975 16:15:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:33.975 16:15:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:33.975 16:15:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
00:24:33.975 16:15:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:24:33.975 16:15:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:33.975 16:15:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:33.975 16:15:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:33.975 16:15:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:24:33.975 16:15:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.975 16:15:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.975 16:15:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.975 16:15:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.975 16:15:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.975 16:15:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.975 16:15:04 -- paths/export.sh@5 -- # export PATH 00:24:33.975 16:15:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.975 16:15:04 -- nvmf/common.sh@46 -- # : 0 00:24:33.975 16:15:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:24:33.975 16:15:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:24:33.975 16:15:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:24:33.975 16:15:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:33.975 16:15:04 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:33.975 16:15:04 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:24:33.975 16:15:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:24:33.975 16:15:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:24:33.975 16:15:04 -- host/dma.sh@12 -- # '[' rdma '!=' rdma ']' 00:24:33.975 16:15:04 -- host/dma.sh@16 -- # MALLOC_BDEV_SIZE=256 00:24:33.975 16:15:04 -- host/dma.sh@17 -- # MALLOC_BLOCK_SIZE=512 00:24:33.975 16:15:04 -- host/dma.sh@18 -- # subsystem=0 00:24:33.975 16:15:04 -- host/dma.sh@93 -- # nvmftestinit 00:24:33.975 16:15:04 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:24:33.975 16:15:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:33.975 16:15:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:24:33.975 16:15:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:24:33.975 16:15:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:24:33.975 16:15:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.975 16:15:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:33.975 16:15:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.975 16:15:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:24:33.975 16:15:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:24:33.975 16:15:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:24:33.975 16:15:04 -- common/autotest_common.sh@10 -- # set +x 00:24:40.645 16:15:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:24:40.645 16:15:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:24:40.645 16:15:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:24:40.645 16:15:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:24:40.645 16:15:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:24:40.645 16:15:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:24:40.645 16:15:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:24:40.645 16:15:11 -- nvmf/common.sh@294 -- # net_devs=() 00:24:40.645 16:15:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:24:40.645 16:15:11 -- nvmf/common.sh@295 -- # e810=() 00:24:40.645 16:15:11 -- nvmf/common.sh@295 -- # local -ga e810 00:24:40.645 16:15:11 -- nvmf/common.sh@296 -- # x722=() 00:24:40.645 16:15:11 -- nvmf/common.sh@296 -- # local -ga x722 00:24:40.645 16:15:11 -- nvmf/common.sh@297 -- # mlx=() 00:24:40.645 16:15:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:24:40.645 16:15:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.645 16:15:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.645 16:15:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.645 16:15:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.645 16:15:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.645 16:15:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.645 16:15:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.645 16:15:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.645 16:15:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.645 16:15:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.645 16:15:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.645 16:15:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:24:40.645 16:15:11 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:24:40.645 16:15:11 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:24:40.645 16:15:11 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:24:40.645 16:15:11 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:24:40.645 16:15:11 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:24:40.645 16:15:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:24:40.645 16:15:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:40.645 16:15:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:24:40.645 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:24:40.645 16:15:11 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:40.645 16:15:11 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:40.645 16:15:11 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:40.645 16:15:11 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:40.645 16:15:11 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:40.645 16:15:11 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:40.645 16:15:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:24:40.645 16:15:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:24:40.645 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:24:40.645 16:15:11 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:24:40.645 16:15:11 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:24:40.645 16:15:11 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:24:40.645 16:15:11 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:24:40.645 16:15:11 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:24:40.645 16:15:11 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:24:40.645 16:15:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:24:40.645 16:15:11 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:24:40.645 16:15:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:40.645 16:15:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.645 16:15:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:40.645 16:15:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.645 16:15:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:24:40.645 Found net devices under 0000:d9:00.0: mlx_0_0 00:24:40.645 16:15:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.645 16:15:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:24:40.645 16:15:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.645 16:15:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:24:40.645 16:15:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.645 16:15:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:24:40.645 Found net devices under 0000:d9:00.1: mlx_0_1 00:24:40.645 16:15:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.645 16:15:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:24:40.645 16:15:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:24:40.645 16:15:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:24:40.645 16:15:11 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:24:40.645 16:15:11 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:24:40.645 16:15:11 -- nvmf/common.sh@408 -- # rdma_device_init 00:24:40.645 16:15:11 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:24:40.645 16:15:11 -- nvmf/common.sh@57 -- # uname 00:24:40.645 16:15:11 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:24:40.645 16:15:11 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:24:40.645 16:15:11 -- nvmf/common.sh@62 -- # modprobe ib_core 00:24:40.645 16:15:11 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:24:40.906 16:15:11 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:24:40.906 16:15:11 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:24:40.906 16:15:11 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:24:40.906 16:15:11 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:24:40.906 16:15:11 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:24:40.906 16:15:11 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:24:40.906 16:15:11 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:24:40.906 16:15:11 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:40.906 16:15:11 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:40.906 16:15:11 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:40.906 16:15:11 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:40.906 16:15:11 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:40.906 16:15:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:40.906 16:15:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.906 16:15:11 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:40.906 16:15:11 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:40.906 16:15:11 -- nvmf/common.sh@104 -- # continue 2 00:24:40.906 16:15:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:40.906 16:15:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.906 16:15:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:40.906 16:15:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.906 16:15:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:40.906 16:15:11 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:40.906 16:15:11 -- nvmf/common.sh@104 -- # continue 2 00:24:40.906 16:15:11 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:40.906 16:15:11 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:24:40.906 16:15:11 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:40.906 16:15:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:40.906 16:15:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:40.906 16:15:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:40.906 16:15:11 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:24:40.906 16:15:11 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:24:40.906 16:15:11 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:24:40.906 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:40.906 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:24:40.906 altname enp217s0f0np0 00:24:40.906 altname ens818f0np0 00:24:40.906 inet 192.168.100.8/24 scope global mlx_0_0 00:24:40.906 valid_lft forever preferred_lft forever 00:24:40.906 16:15:11 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:24:40.906 16:15:11 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:24:40.906 16:15:11 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:40.906 16:15:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:40.906 16:15:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:40.906 16:15:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:40.906 16:15:11 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:24:40.906 16:15:11 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:24:40.906 16:15:11 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:24:40.906 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:24:40.906 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:24:40.906 altname enp217s0f1np1 00:24:40.906 altname ens818f1np1 00:24:40.906 inet 192.168.100.9/24 scope global mlx_0_1 00:24:40.906 valid_lft forever preferred_lft forever 00:24:40.906 16:15:11 -- nvmf/common.sh@410 -- # return 0 00:24:40.906 16:15:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:24:40.906 16:15:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:24:40.906 16:15:11 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:24:40.906 16:15:11 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:24:40.906 16:15:11 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:24:40.906 16:15:11 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:24:40.906 16:15:11 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:24:40.906 16:15:11 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:24:40.906 16:15:11 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:24:40.906 16:15:11 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:24:40.906 16:15:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:40.906 16:15:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.906 16:15:11 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:24:40.906 16:15:11 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:24:40.906 16:15:11 -- nvmf/common.sh@104 -- # continue 2 00:24:40.906 16:15:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:24:40.906 16:15:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.906 16:15:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:24:40.906 16:15:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:24:40.906 16:15:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:24:40.906 16:15:11 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:24:40.906 16:15:11 -- nvmf/common.sh@104 -- # continue 2 00:24:40.906 16:15:11 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:40.906 16:15:11 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:24:40.906 16:15:11 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:24:40.906 16:15:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:24:40.906 16:15:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:40.906 16:15:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:40.906 16:15:11 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:24:40.906 16:15:11 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:24:40.906 16:15:11 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:24:40.906 16:15:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:24:40.906 16:15:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:24:40.906 16:15:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:24:40.906 16:15:11 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:24:40.906 192.168.100.9' 00:24:40.906 16:15:11 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:24:40.906 192.168.100.9' 00:24:40.906 16:15:11 -- nvmf/common.sh@445 -- # head -n 1 00:24:40.906 16:15:11 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:24:40.906 16:15:11 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:24:40.906 192.168.100.9' 00:24:40.906 16:15:11 -- nvmf/common.sh@446 -- # tail -n +2 00:24:40.906 16:15:11 -- nvmf/common.sh@446 -- # head -n 1 00:24:40.906 16:15:11 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:24:40.906 16:15:11 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:24:40.906 16:15:11 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:24:40.906 16:15:11 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:24:40.906 16:15:11 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:24:40.906 16:15:11 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:24:40.906 16:15:11 -- host/dma.sh@94 -- # nvmfappstart -m 0x3 00:24:40.906 16:15:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:24:40.906 16:15:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:40.906 16:15:11 -- common/autotest_common.sh@10 -- # set +x 00:24:40.906 16:15:11 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:40.906 16:15:11 -- nvmf/common.sh@469 -- # nvmfpid=1446324 00:24:40.906 16:15:11 -- nvmf/common.sh@470 -- # waitforlisten 1446324 00:24:40.906 16:15:11 -- common/autotest_common.sh@829 -- # '[' -z 1446324 ']' 00:24:40.906 16:15:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.906 16:15:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:40.906 16:15:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.906 16:15:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:40.906 16:15:11 -- common/autotest_common.sh@10 -- # set +x 00:24:41.166 [2024-11-20 16:15:11.710675] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:41.166 [2024-11-20 16:15:11.710733] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:41.166 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.166 [2024-11-20 16:15:11.782242] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:41.166 [2024-11-20 16:15:11.820933] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:24:41.166 [2024-11-20 16:15:11.821041] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:41.166 [2024-11-20 16:15:11.821051] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:41.166 [2024-11-20 16:15:11.821060] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
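The nvmftestinit block above settles on 192.168.100.8 and 192.168.100.9 by walking the mlx_0_* net devices and pulling each interface's first IPv4 address. A sketch of that pipeline, reconstructed from the get_ip_address xtrace shown above (interface names and addresses are the ones on this rig):

    get_ip_address() {
        local interface=$1
        # fourth field of `ip -o -4 addr show` is addr/prefix; strip the prefix length
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }
    NVMF_FIRST_TARGET_IP=$(get_ip_address mlx_0_0)    # 192.168.100.8 in this run
    NVMF_SECOND_TARGET_IP=$(get_ip_address mlx_0_1)   # 192.168.100.9 in this run

With the addresses in hand the harness sets NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024', loads nvme-rdma, and starts nvmf_tgt with -m 0x3, which is the target process (pid 1446324) whose startup notices appear just above.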
00:24:41.166 [2024-11-20 16:15:11.821107] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:41.166 [2024-11-20 16:15:11.821109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:42.105 16:15:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:42.105 16:15:12 -- common/autotest_common.sh@862 -- # return 0 00:24:42.105 16:15:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:24:42.105 16:15:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:42.105 16:15:12 -- common/autotest_common.sh@10 -- # set +x 00:24:42.105 16:15:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.105 16:15:12 -- host/dma.sh@96 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:24:42.105 16:15:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.105 16:15:12 -- common/autotest_common.sh@10 -- # set +x 00:24:42.105 [2024-11-20 16:15:12.617037] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xdaab40/0xdaeff0) succeed. 00:24:42.105 [2024-11-20 16:15:12.625831] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xdabff0/0xdf0690) succeed. 00:24:42.105 16:15:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.105 16:15:12 -- host/dma.sh@97 -- # rpc_cmd bdev_malloc_create 256 512 -b Malloc0 00:24:42.105 16:15:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.105 16:15:12 -- common/autotest_common.sh@10 -- # set +x 00:24:42.105 Malloc0 00:24:42.105 16:15:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.105 16:15:12 -- host/dma.sh@98 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:42.105 16:15:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.105 16:15:12 -- common/autotest_common.sh@10 -- # set +x 00:24:42.105 16:15:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.105 16:15:12 -- host/dma.sh@99 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0 00:24:42.105 16:15:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.105 16:15:12 -- common/autotest_common.sh@10 -- # set +x 00:24:42.105 16:15:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.105 16:15:12 -- host/dma.sh@100 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t rdma -a 192.168.100.8 -s 4420 00:24:42.105 16:15:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.105 16:15:12 -- common/autotest_common.sh@10 -- # set +x 00:24:42.105 [2024-11-20 16:15:12.782270] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:24:42.105 16:15:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.105 16:15:12 -- host/dma.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Nvme0n1 -f -x translate -r /var/tmp/dma.sock 00:24:42.105 16:15:12 -- host/dma.sh@104 -- # gen_nvmf_target_json 0 00:24:42.105 16:15:12 -- nvmf/common.sh@520 -- # config=() 00:24:42.105 16:15:12 -- nvmf/common.sh@520 -- # local subsystem config 00:24:42.105 16:15:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:24:42.105 16:15:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:24:42.105 { 00:24:42.105 "params": { 00:24:42.105 "name": "Nvme$subsystem", 00:24:42.105 "trtype": "$TEST_TRANSPORT", 00:24:42.105 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:42.105 "adrfam": 
"ipv4", 00:24:42.105 "trsvcid": "$NVMF_PORT", 00:24:42.105 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:42.105 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:42.105 "hdgst": ${hdgst:-false}, 00:24:42.105 "ddgst": ${ddgst:-false} 00:24:42.105 }, 00:24:42.105 "method": "bdev_nvme_attach_controller" 00:24:42.105 } 00:24:42.105 EOF 00:24:42.105 )") 00:24:42.105 16:15:12 -- nvmf/common.sh@542 -- # cat 00:24:42.105 16:15:12 -- nvmf/common.sh@544 -- # jq . 00:24:42.105 16:15:12 -- nvmf/common.sh@545 -- # IFS=, 00:24:42.105 16:15:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:24:42.105 "params": { 00:24:42.105 "name": "Nvme0", 00:24:42.105 "trtype": "rdma", 00:24:42.105 "traddr": "192.168.100.8", 00:24:42.105 "adrfam": "ipv4", 00:24:42.105 "trsvcid": "4420", 00:24:42.105 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:42.105 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:24:42.105 "hdgst": false, 00:24:42.105 "ddgst": false 00:24:42.105 }, 00:24:42.105 "method": "bdev_nvme_attach_controller" 00:24:42.105 }' 00:24:42.106 [2024-11-20 16:15:12.811678] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:24:42.106 [2024-11-20 16:15:12.811732] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1446609 ] 00:24:42.106 EAL: No free 2048 kB hugepages reported on node 1 00:24:42.106 [2024-11-20 16:15:12.879025] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:42.365 [2024-11-20 16:15:12.916129] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:42.365 [2024-11-20 16:15:12.916132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:47.641 bdev Nvme0n1 reports 1 memory domains 00:24:47.641 bdev Nvme0n1 supports RDMA memory domain 00:24:47.641 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:47.641 ========================================================================== 00:24:47.641 Latency [us] 00:24:47.641 IOPS MiB/s Average min max 00:24:47.641 Core 2: 21908.70 85.58 729.57 234.70 8947.23 00:24:47.641 Core 3: 22191.61 86.69 720.29 232.06 8986.52 00:24:47.641 ========================================================================== 00:24:47.641 Total : 44100.30 172.27 724.90 232.06 8986.52 00:24:47.641 00:24:47.641 Total operations: 220573, translate 220573 pull_push 0 memzero 0 00:24:47.641 16:15:18 -- host/dma.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b Malloc0 -x pull_push -r /var/tmp/dma.sock 00:24:47.641 16:15:18 -- host/dma.sh@107 -- # gen_malloc_json 00:24:47.641 16:15:18 -- host/dma.sh@21 -- # jq . 00:24:47.641 [2024-11-20 16:15:18.337472] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:47.641 [2024-11-20 16:15:18.337539] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1447512 ] 00:24:47.641 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.641 [2024-11-20 16:15:18.404661] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:47.641 [2024-11-20 16:15:18.438998] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:47.641 [2024-11-20 16:15:18.439000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:52.916 bdev Malloc0 reports 1 memory domains 00:24:52.916 bdev Malloc0 doesn't support RDMA memory domain 00:24:52.916 Initialization complete, running randrw IO for 5 sec on 2 cores 00:24:52.916 ========================================================================== 00:24:52.916 Latency [us] 00:24:52.916 IOPS MiB/s Average min max 00:24:52.916 Core 2: 14908.65 58.24 1072.46 362.11 1339.26 00:24:52.916 Core 3: 15150.57 59.18 1055.31 408.98 1927.54 00:24:52.916 ========================================================================== 00:24:52.916 Total : 30059.22 117.42 1063.82 362.11 1927.54 00:24:52.916 00:24:52.916 Total operations: 150346, translate 0 pull_push 601384 memzero 0 00:24:53.175 16:15:23 -- host/dma.sh@110 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randread -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x memzero -r /var/tmp/dma.sock 00:24:53.175 16:15:23 -- host/dma.sh@110 -- # gen_lvol_nvme_json 0 00:24:53.175 16:15:23 -- host/dma.sh@48 -- # local subsystem=0 00:24:53.175 16:15:23 -- host/dma.sh@50 -- # jq . 00:24:53.175 Ignoring -M option 00:24:53.175 [2024-11-20 16:15:23.767661] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:24:53.175 [2024-11-20 16:15:23.767717] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1448497 ] 00:24:53.175 EAL: No free 2048 kB hugepages reported on node 1 00:24:53.175 [2024-11-20 16:15:23.834419] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:53.175 [2024-11-20 16:15:23.869103] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:53.175 [2024-11-20 16:15:23.869106] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:53.434 [2024-11-20 16:15:24.073685] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:24:58.727 [2024-11-20 16:15:29.103146] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:24:58.727 bdev 5b76b3dd-8923-4fb1-92fd-6b1d681882fc reports 1 memory domains 00:24:58.727 bdev 5b76b3dd-8923-4fb1-92fd-6b1d681882fc supports RDMA memory domain 00:24:58.727 Initialization complete, running randread IO for 5 sec on 2 cores 00:24:58.727 ========================================================================== 00:24:58.727 Latency [us] 00:24:58.727 IOPS MiB/s Average min max 00:24:58.727 Core 2: 72694.67 283.96 219.24 58.39 1646.68 00:24:58.727 Core 3: 71527.82 279.41 222.80 68.73 1575.93 00:24:58.727 ========================================================================== 00:24:58.727 Total : 144222.48 563.37 221.00 58.39 1646.68 00:24:58.727 00:24:58.727 Total operations: 721205, translate 0 pull_push 0 memzero 721205 00:24:58.727 16:15:29 -- host/dma.sh@113 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 16 -o 4096 -w write -t 1 -r 'trtype:rdma adrfam:IPV4 traddr:192.168.100.8 trsvcid:4420' 00:24:58.727 EAL: No free 2048 kB hugepages reported on node 1 00:24:58.727 [2024-11-20 16:15:29.404091] subsystem.c:1344:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on RDMA/192.168.100.8/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:25:01.265 Initializing NVMe Controllers 00:25:01.265 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode0 00:25:01.265 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:25:01.265 Initialization complete. Launching workers. 00:25:01.265 ======================================================== 00:25:01.265 Latency(us) 00:25:01.265 Device Information : IOPS MiB/s Average min max 00:25:01.265 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 0: 2016.00 7.88 7972.22 6983.78 7997.22 00:25:01.265 ======================================================== 00:25:01.265 Total : 2016.00 7.88 7972.22 6983.78 7997.22 00:25:01.265 00:25:01.265 16:15:31 -- host/dma.sh@116 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/dma/test_dma/test_dma -q 16 -o 4096 -w randrw -M 70 -t 5 -m 0xc --json /dev/fd/62 -b lvs0/lvol0 -f -x translate -r /var/tmp/dma.sock 00:25:01.265 16:15:31 -- host/dma.sh@116 -- # gen_lvol_nvme_json 0 00:25:01.265 16:15:31 -- host/dma.sh@48 -- # local subsystem=0 00:25:01.265 16:15:31 -- host/dma.sh@50 -- # jq . 
00:25:01.265 [2024-11-20 16:15:31.745236] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:01.265 [2024-11-20 16:15:31.745292] [ DPDK EAL parameters: test_dma --no-shconf -c 0xc --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1449847 ] 00:25:01.265 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.265 [2024-11-20 16:15:31.812303] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:01.265 [2024-11-20 16:15:31.849136] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:01.265 [2024-11-20 16:15:31.849139] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:01.265 [2024-11-20 16:15:32.046898] vbdev_lvol_rpc.c: 347:rpc_bdev_lvol_create: *WARNING*: vbdev_lvol_rpc_req_size: deprecated feature rpc_bdev_lvol_create/resize req.size to be removed in v23.09 00:25:06.543 [2024-11-20 16:15:37.076510] app.c: 883:log_deprecation_hits: *WARNING*: vbdev_lvol_rpc_req_size: deprecation 'rpc_bdev_lvol_create/resize req.size' scheduled for removal in v23.09 hit 1 times 00:25:06.543 bdev 256f006d-4f36-479e-9915-0fca03c577a3 reports 1 memory domains 00:25:06.543 bdev 256f006d-4f36-479e-9915-0fca03c577a3 supports RDMA memory domain 00:25:06.543 Initialization complete, running randrw IO for 5 sec on 2 cores 00:25:06.543 ========================================================================== 00:25:06.543 Latency [us] 00:25:06.543 IOPS MiB/s Average min max 00:25:06.543 Core 2: 19354.20 75.60 826.05 15.60 11085.84 00:25:06.543 Core 3: 19754.13 77.16 809.26 16.06 10917.02 00:25:06.543 ========================================================================== 00:25:06.543 Total : 39108.33 152.77 817.57 15.60 11085.84 00:25:06.543 00:25:06.543 Total operations: 195572, translate 195463 pull_push 0 memzero 109 00:25:06.543 16:15:37 -- host/dma.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:25:06.543 16:15:37 -- host/dma.sh@120 -- # nvmftestfini 00:25:06.543 16:15:37 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:06.543 16:15:37 -- nvmf/common.sh@116 -- # sync 00:25:06.543 16:15:37 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:25:06.543 16:15:37 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:25:06.543 16:15:37 -- nvmf/common.sh@119 -- # set +e 00:25:06.543 16:15:37 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:06.543 16:15:37 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:25:06.543 rmmod nvme_rdma 00:25:06.543 rmmod nvme_fabrics 00:25:06.543 16:15:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:06.543 16:15:37 -- nvmf/common.sh@123 -- # set -e 00:25:06.543 16:15:37 -- nvmf/common.sh@124 -- # return 0 00:25:06.543 16:15:37 -- nvmf/common.sh@477 -- # '[' -n 1446324 ']' 00:25:06.543 16:15:37 -- nvmf/common.sh@478 -- # killprocess 1446324 00:25:06.543 16:15:37 -- common/autotest_common.sh@936 -- # '[' -z 1446324 ']' 00:25:06.543 16:15:37 -- common/autotest_common.sh@940 -- # kill -0 1446324 00:25:06.543 16:15:37 -- common/autotest_common.sh@941 -- # uname 00:25:06.802 16:15:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:06.802 16:15:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1446324 00:25:06.802 16:15:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:06.802 16:15:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:06.802 16:15:37 -- common/autotest_common.sh@954 -- # echo 'killing process with 
pid 1446324' 00:25:06.802 killing process with pid 1446324 00:25:06.802 16:15:37 -- common/autotest_common.sh@955 -- # kill 1446324 00:25:06.802 16:15:37 -- common/autotest_common.sh@960 -- # wait 1446324 00:25:07.061 16:15:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:07.061 16:15:37 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:25:07.061 00:25:07.061 real 0m33.177s 00:25:07.061 user 1m36.168s 00:25:07.061 sys 0m6.579s 00:25:07.061 16:15:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:07.061 16:15:37 -- common/autotest_common.sh@10 -- # set +x 00:25:07.061 ************************************ 00:25:07.061 END TEST dma 00:25:07.061 ************************************ 00:25:07.061 16:15:37 -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:25:07.061 16:15:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:07.061 16:15:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:07.061 16:15:37 -- common/autotest_common.sh@10 -- # set +x 00:25:07.061 ************************************ 00:25:07.061 START TEST nvmf_identify 00:25:07.061 ************************************ 00:25:07.061 16:15:37 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=rdma 00:25:07.061 * Looking for test storage... 00:25:07.061 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:07.061 16:15:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:07.061 16:15:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:07.061 16:15:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:07.321 16:15:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:07.321 16:15:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:07.321 16:15:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:07.321 16:15:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:07.321 16:15:37 -- scripts/common.sh@335 -- # IFS=.-: 00:25:07.321 16:15:37 -- scripts/common.sh@335 -- # read -ra ver1 00:25:07.321 16:15:37 -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.321 16:15:37 -- scripts/common.sh@336 -- # read -ra ver2 00:25:07.321 16:15:37 -- scripts/common.sh@337 -- # local 'op=<' 00:25:07.321 16:15:37 -- scripts/common.sh@339 -- # ver1_l=2 00:25:07.321 16:15:37 -- scripts/common.sh@340 -- # ver2_l=1 00:25:07.321 16:15:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:07.321 16:15:37 -- scripts/common.sh@343 -- # case "$op" in 00:25:07.321 16:15:37 -- scripts/common.sh@344 -- # : 1 00:25:07.321 16:15:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:07.321 16:15:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:07.321 16:15:37 -- scripts/common.sh@364 -- # decimal 1 00:25:07.321 16:15:37 -- scripts/common.sh@352 -- # local d=1 00:25:07.321 16:15:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.321 16:15:37 -- scripts/common.sh@354 -- # echo 1 00:25:07.321 16:15:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:07.321 16:15:37 -- scripts/common.sh@365 -- # decimal 2 00:25:07.321 16:15:37 -- scripts/common.sh@352 -- # local d=2 00:25:07.321 16:15:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.321 16:15:37 -- scripts/common.sh@354 -- # echo 2 00:25:07.321 16:15:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:07.321 16:15:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:07.321 16:15:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:07.321 16:15:37 -- scripts/common.sh@367 -- # return 0 00:25:07.321 16:15:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.321 16:15:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:07.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.321 --rc genhtml_branch_coverage=1 00:25:07.321 --rc genhtml_function_coverage=1 00:25:07.321 --rc genhtml_legend=1 00:25:07.321 --rc geninfo_all_blocks=1 00:25:07.321 --rc geninfo_unexecuted_blocks=1 00:25:07.321 00:25:07.321 ' 00:25:07.321 16:15:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:07.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.321 --rc genhtml_branch_coverage=1 00:25:07.321 --rc genhtml_function_coverage=1 00:25:07.321 --rc genhtml_legend=1 00:25:07.321 --rc geninfo_all_blocks=1 00:25:07.321 --rc geninfo_unexecuted_blocks=1 00:25:07.321 00:25:07.321 ' 00:25:07.321 16:15:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:07.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.321 --rc genhtml_branch_coverage=1 00:25:07.321 --rc genhtml_function_coverage=1 00:25:07.321 --rc genhtml_legend=1 00:25:07.321 --rc geninfo_all_blocks=1 00:25:07.321 --rc geninfo_unexecuted_blocks=1 00:25:07.321 00:25:07.321 ' 00:25:07.321 16:15:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:07.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.321 --rc genhtml_branch_coverage=1 00:25:07.321 --rc genhtml_function_coverage=1 00:25:07.321 --rc genhtml_legend=1 00:25:07.321 --rc geninfo_all_blocks=1 00:25:07.321 --rc geninfo_unexecuted_blocks=1 00:25:07.321 00:25:07.321 ' 00:25:07.321 16:15:37 -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.321 16:15:37 -- nvmf/common.sh@7 -- # uname -s 00:25:07.321 16:15:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.321 16:15:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.321 16:15:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.321 16:15:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.321 16:15:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.321 16:15:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.321 16:15:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.321 16:15:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.321 16:15:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.321 16:15:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.321 16:15:37 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:25:07.321 16:15:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:07.321 16:15:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.321 16:15:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.321 16:15:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.321 16:15:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:07.321 16:15:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.321 16:15:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.321 16:15:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.321 16:15:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.322 16:15:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.322 16:15:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.322 16:15:37 -- paths/export.sh@5 -- # export PATH 00:25:07.322 16:15:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.322 16:15:37 -- nvmf/common.sh@46 -- # : 0 00:25:07.322 16:15:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:07.322 16:15:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:07.322 16:15:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:07.322 16:15:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.322 16:15:37 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.322 16:15:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:07.322 16:15:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:07.322 16:15:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:07.322 16:15:37 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:07.322 16:15:37 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:07.322 16:15:37 -- host/identify.sh@14 -- # nvmftestinit 00:25:07.322 16:15:37 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:25:07.322 16:15:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.322 16:15:37 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:07.322 16:15:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:07.322 16:15:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:07.322 16:15:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.322 16:15:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:07.322 16:15:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.322 16:15:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:07.322 16:15:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:07.322 16:15:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:07.322 16:15:37 -- common/autotest_common.sh@10 -- # set +x 00:25:14.002 16:15:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:14.002 16:15:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:14.002 16:15:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:14.002 16:15:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:14.002 16:15:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:14.002 16:15:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:14.002 16:15:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:14.002 16:15:43 -- nvmf/common.sh@294 -- # net_devs=() 00:25:14.002 16:15:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:14.002 16:15:43 -- nvmf/common.sh@295 -- # e810=() 00:25:14.002 16:15:43 -- nvmf/common.sh@295 -- # local -ga e810 00:25:14.002 16:15:43 -- nvmf/common.sh@296 -- # x722=() 00:25:14.002 16:15:43 -- nvmf/common.sh@296 -- # local -ga x722 00:25:14.002 16:15:43 -- nvmf/common.sh@297 -- # mlx=() 00:25:14.002 16:15:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:14.002 16:15:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.002 16:15:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.002 16:15:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.002 16:15:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.002 16:15:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.002 16:15:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.002 16:15:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.002 16:15:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.002 16:15:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.002 16:15:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.002 16:15:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.002 16:15:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:14.002 16:15:43 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:25:14.002 16:15:43 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:25:14.002 
16:15:43 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:25:14.002 16:15:43 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:25:14.002 16:15:43 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:25:14.002 16:15:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:14.002 16:15:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:14.002 16:15:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:14.002 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:14.002 16:15:43 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:14.002 16:15:43 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:14.002 16:15:43 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:14.002 16:15:43 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:14.002 16:15:43 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:14.002 16:15:43 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:14.002 16:15:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:14.002 16:15:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:14.002 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:14.002 16:15:43 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:14.002 16:15:43 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:14.002 16:15:43 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:14.002 16:15:43 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:14.002 16:15:43 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:14.002 16:15:43 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:14.002 16:15:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:14.002 16:15:43 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:25:14.002 16:15:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:14.002 16:15:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.002 16:15:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:14.002 16:15:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.002 16:15:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:14.002 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:14.002 16:15:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.002 16:15:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:14.002 16:15:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.002 16:15:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:14.002 16:15:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.002 16:15:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:14.002 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:14.002 16:15:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.002 16:15:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:14.002 16:15:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:14.002 16:15:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:14.002 16:15:43 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:25:14.002 16:15:43 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:25:14.002 16:15:43 -- nvmf/common.sh@408 -- # rdma_device_init 00:25:14.002 16:15:43 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:25:14.002 16:15:43 -- nvmf/common.sh@57 -- # uname 00:25:14.002 16:15:43 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:25:14.002 16:15:43 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:25:14.002 
16:15:43 -- nvmf/common.sh@62 -- # modprobe ib_core 00:25:14.002 16:15:43 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:25:14.002 16:15:43 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:25:14.002 16:15:43 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:25:14.002 16:15:43 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:25:14.002 16:15:43 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:25:14.002 16:15:43 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:25:14.002 16:15:43 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:14.003 16:15:43 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:25:14.003 16:15:43 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:14.003 16:15:43 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:14.003 16:15:43 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:14.003 16:15:43 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:14.003 16:15:43 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:14.003 16:15:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:14.003 16:15:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:14.003 16:15:43 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:14.003 16:15:43 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:14.003 16:15:43 -- nvmf/common.sh@104 -- # continue 2 00:25:14.003 16:15:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:14.003 16:15:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:14.003 16:15:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:14.003 16:15:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:14.003 16:15:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:14.003 16:15:43 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:14.003 16:15:43 -- nvmf/common.sh@104 -- # continue 2 00:25:14.003 16:15:43 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:14.003 16:15:43 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:25:14.003 16:15:43 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:14.003 16:15:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:14.003 16:15:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:14.003 16:15:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:14.003 16:15:43 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:25:14.003 16:15:43 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:25:14.003 16:15:43 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:25:14.003 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:14.003 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:14.003 altname enp217s0f0np0 00:25:14.003 altname ens818f0np0 00:25:14.003 inet 192.168.100.8/24 scope global mlx_0_0 00:25:14.003 valid_lft forever preferred_lft forever 00:25:14.003 16:15:43 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:14.003 16:15:43 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:25:14.003 16:15:43 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:14.003 16:15:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:14.003 16:15:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:14.003 16:15:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:14.003 16:15:43 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:25:14.003 16:15:43 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:25:14.003 16:15:43 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:25:14.003 7: mlx_0_1: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:25:14.003 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:14.003 altname enp217s0f1np1 00:25:14.003 altname ens818f1np1 00:25:14.003 inet 192.168.100.9/24 scope global mlx_0_1 00:25:14.003 valid_lft forever preferred_lft forever 00:25:14.003 16:15:43 -- nvmf/common.sh@410 -- # return 0 00:25:14.003 16:15:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:14.003 16:15:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:14.003 16:15:43 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:25:14.003 16:15:43 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:25:14.003 16:15:43 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:25:14.003 16:15:43 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:14.003 16:15:43 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:14.003 16:15:43 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:14.003 16:15:43 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:14.003 16:15:43 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:14.003 16:15:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:14.003 16:15:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:14.003 16:15:43 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:14.003 16:15:43 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:14.003 16:15:43 -- nvmf/common.sh@104 -- # continue 2 00:25:14.003 16:15:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:14.003 16:15:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:14.003 16:15:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:14.003 16:15:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:14.003 16:15:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:14.003 16:15:43 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:14.003 16:15:43 -- nvmf/common.sh@104 -- # continue 2 00:25:14.003 16:15:43 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:14.003 16:15:43 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:25:14.003 16:15:43 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:14.003 16:15:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:14.003 16:15:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:14.003 16:15:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:14.003 16:15:43 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:14.003 16:15:43 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:25:14.003 16:15:43 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:14.003 16:15:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:14.003 16:15:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:14.003 16:15:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:14.003 16:15:43 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:25:14.003 192.168.100.9' 00:25:14.003 16:15:43 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:25:14.003 192.168.100.9' 00:25:14.003 16:15:43 -- nvmf/common.sh@445 -- # head -n 1 00:25:14.003 16:15:43 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:14.003 16:15:43 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:14.003 192.168.100.9' 00:25:14.003 16:15:43 -- nvmf/common.sh@446 -- # tail -n +2 00:25:14.003 16:15:43 -- nvmf/common.sh@446 -- # head -n 1 00:25:14.003 16:15:43 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:14.003 16:15:43 -- nvmf/common.sh@447 -- # '[' 
-z 192.168.100.8 ']' 00:25:14.003 16:15:43 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:14.003 16:15:43 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:25:14.003 16:15:43 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:25:14.003 16:15:43 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:25:14.003 16:15:43 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:14.003 16:15:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:14.003 16:15:43 -- common/autotest_common.sh@10 -- # set +x 00:25:14.003 16:15:43 -- host/identify.sh@19 -- # nvmfpid=1453992 00:25:14.003 16:15:43 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:14.003 16:15:43 -- host/identify.sh@23 -- # waitforlisten 1453992 00:25:14.003 16:15:43 -- common/autotest_common.sh@829 -- # '[' -z 1453992 ']' 00:25:14.003 16:15:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.003 16:15:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:14.003 16:15:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.003 16:15:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:14.003 16:15:43 -- host/identify.sh@18 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:14.003 16:15:43 -- common/autotest_common.sh@10 -- # set +x 00:25:14.003 [2024-11-20 16:15:44.024208] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:14.003 [2024-11-20 16:15:44.024258] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.003 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.003 [2024-11-20 16:15:44.095102] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.003 [2024-11-20 16:15:44.133601] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:14.003 [2024-11-20 16:15:44.133714] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.003 [2024-11-20 16:15:44.133724] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.003 [2024-11-20 16:15:44.133733] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
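The trace above finishes the host-side prep: the ib_*/rdma_* kernel modules and nvme-rdma are loaded, the two Mellanox ports are confirmed at 192.168.100.8 and 192.168.100.9, and nvmf_tgt is launched and then waited on via /var/tmp/spdk.sock. A minimal by-hand sketch of those same steps (paths relative to an SPDK checkout; the socket poll is only a simplified stand-in for waitforlisten) would look roughly like:

    # load the IB/RDMA stack and the NVMe/RDMA initiator (load_ib_rdma_modules does this one module at a time)
    sudo modprobe -a ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm
    sudo modprobe nvme-rdma

    # read the IPv4 address on the first Mellanox port, mirroring get_ip_address in nvmf/common.sh
    ip -o -4 addr show mlx_0_0 | awk '{print $4}' | cut -d/ -f1    # expected: 192.168.100.8

    # start the target with the same shm id, tracepoint mask and core mask as host/identify.sh@18
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

    # crude stand-in for waitforlisten: block until the RPC unix socket appears
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done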
00:25:14.003 [2024-11-20 16:15:44.133774] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.003 [2024-11-20 16:15:44.133871] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.003 [2024-11-20 16:15:44.133955] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.003 [2024-11-20 16:15:44.133957] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.263 16:15:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:14.263 16:15:44 -- common/autotest_common.sh@862 -- # return 0 00:25:14.263 16:15:44 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:25:14.263 16:15:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.263 16:15:44 -- common/autotest_common.sh@10 -- # set +x 00:25:14.263 [2024-11-20 16:15:44.864853] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x9cf0d0/0x9d35a0) succeed. 00:25:14.263 [2024-11-20 16:15:44.874270] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x9d0670/0xa14c40) succeed. 00:25:14.263 16:15:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.263 16:15:44 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:14.263 16:15:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:14.263 16:15:44 -- common/autotest_common.sh@10 -- # set +x 00:25:14.263 16:15:45 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:14.263 16:15:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.263 16:15:45 -- common/autotest_common.sh@10 -- # set +x 00:25:14.263 Malloc0 00:25:14.263 16:15:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.263 16:15:45 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:14.263 16:15:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.263 16:15:45 -- common/autotest_common.sh@10 -- # set +x 00:25:14.263 16:15:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.263 16:15:45 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:14.263 16:15:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.263 16:15:45 -- common/autotest_common.sh@10 -- # set +x 00:25:14.525 16:15:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.525 16:15:45 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:14.525 16:15:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.525 16:15:45 -- common/autotest_common.sh@10 -- # set +x 00:25:14.525 [2024-11-20 16:15:45.080358] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:14.525 16:15:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.525 16:15:45 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:14.526 16:15:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.526 16:15:45 -- common/autotest_common.sh@10 -- # set +x 00:25:14.526 16:15:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.526 16:15:45 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:14.526 16:15:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.526 16:15:45 -- common/autotest_common.sh@10 -- # set +x 00:25:14.526 [2024-11-20 
16:15:45.096024] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:25:14.526 [ 00:25:14.526 { 00:25:14.526 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:14.526 "subtype": "Discovery", 00:25:14.526 "listen_addresses": [ 00:25:14.526 { 00:25:14.526 "transport": "RDMA", 00:25:14.526 "trtype": "RDMA", 00:25:14.526 "adrfam": "IPv4", 00:25:14.526 "traddr": "192.168.100.8", 00:25:14.526 "trsvcid": "4420" 00:25:14.526 } 00:25:14.526 ], 00:25:14.526 "allow_any_host": true, 00:25:14.526 "hosts": [] 00:25:14.526 }, 00:25:14.526 { 00:25:14.526 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.526 "subtype": "NVMe", 00:25:14.526 "listen_addresses": [ 00:25:14.526 { 00:25:14.526 "transport": "RDMA", 00:25:14.526 "trtype": "RDMA", 00:25:14.526 "adrfam": "IPv4", 00:25:14.526 "traddr": "192.168.100.8", 00:25:14.526 "trsvcid": "4420" 00:25:14.526 } 00:25:14.526 ], 00:25:14.526 "allow_any_host": true, 00:25:14.526 "hosts": [], 00:25:14.526 "serial_number": "SPDK00000000000001", 00:25:14.526 "model_number": "SPDK bdev Controller", 00:25:14.526 "max_namespaces": 32, 00:25:14.526 "min_cntlid": 1, 00:25:14.526 "max_cntlid": 65519, 00:25:14.526 "namespaces": [ 00:25:14.526 { 00:25:14.526 "nsid": 1, 00:25:14.526 "bdev_name": "Malloc0", 00:25:14.526 "name": "Malloc0", 00:25:14.526 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:14.526 "eui64": "ABCDEF0123456789", 00:25:14.526 "uuid": "2a501949-4597-4957-9174-0295b9b955dd" 00:25:14.526 } 00:25:14.526 ] 00:25:14.526 } 00:25:14.526 ] 00:25:14.526 16:15:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.526 16:15:45 -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:14.526 [2024-11-20 16:15:45.129277] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:14.526 [2024-11-20 16:15:45.129327] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1454134 ] 00:25:14.526 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.526 [2024-11-20 16:15:45.177706] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:25:14.526 [2024-11-20 16:15:45.177776] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:25:14.526 [2024-11-20 16:15:45.177796] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:25:14.526 [2024-11-20 16:15:45.177801] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:25:14.526 [2024-11-20 16:15:45.177831] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:25:14.526 [2024-11-20 16:15:45.195988] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
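Between the reactor start-up above and the identify run whose debug output follows, the script provisions the target entirely over RPC: an RDMA transport with 1024 shared buffers, a 64 MB Malloc0 bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and RDMA listeners for both the subsystem and discovery on 192.168.100.8:4420. Assuming rpc_cmd maps one-to-one onto scripts/rpc.py subcommands (the relative paths below are assumptions, not taken from this log), the same sequence issued by hand would look roughly like:

    ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
        --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420
    ./scripts/rpc.py nvmf_get_subsystems    # should return the two-entry JSON printed above

    # then query the discovery controller over RDMA, as host/identify.sh@39 does
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all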
00:25:14.526 [2024-11-20 16:15:45.210112] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:14.526 [2024-11-20 16:15:45.210122] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:25:14.526 [2024-11-20 16:15:45.210130] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210137] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210143] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210150] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210156] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210162] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210168] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210175] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210181] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210187] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210193] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210199] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210205] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210212] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210218] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210224] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210230] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210236] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210242] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210248] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210255] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210261] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210270] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 
16:15:45.210276] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210283] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210289] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210295] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210301] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210307] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210313] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210319] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210325] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:25:14.526 [2024-11-20 16:15:45.210331] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:14.526 [2024-11-20 16:15:45.210335] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:25:14.526 [2024-11-20 16:15:45.210356] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.210369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x183f00 00:25:14.526 [2024-11-20 16:15:45.215523] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.526 [2024-11-20 16:15:45.215534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:14.526 [2024-11-20 16:15:45.215542] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.215549] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:14.526 [2024-11-20 16:15:45.215555] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:25:14.526 [2024-11-20 16:15:45.215562] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:25:14.526 [2024-11-20 16:15:45.215574] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.215582] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.526 [2024-11-20 16:15:45.215611] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.526 [2024-11-20 16:15:45.215617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:25:14.526 [2024-11-20 16:15:45.215624] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:25:14.526 [2024-11-20 16:15:45.215630] 
nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.215637] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:25:14.526 [2024-11-20 16:15:45.215645] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.215652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.526 [2024-11-20 16:15:45.215672] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.526 [2024-11-20 16:15:45.215679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:25:14.526 [2024-11-20 16:15:45.215686] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:25:14.526 [2024-11-20 16:15:45.215692] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183f00 00:25:14.526 [2024-11-20 16:15:45.215699] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:25:14.526 [2024-11-20 16:15:45.215707] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.215715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.527 [2024-11-20 16:15:45.215732] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.527 [2024-11-20 16:15:45.215738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:14.527 [2024-11-20 16:15:45.215744] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:14.527 [2024-11-20 16:15:45.215750] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.215758] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.215766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.527 [2024-11-20 16:15:45.215784] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.527 [2024-11-20 16:15:45.215790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:14.527 [2024-11-20 16:15:45.215796] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:25:14.527 [2024-11-20 16:15:45.215802] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:25:14.527 [2024-11-20 16:15:45.215808] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.215815] 
nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:14.527 [2024-11-20 16:15:45.215921] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:25:14.527 [2024-11-20 16:15:45.215927] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:14.527 [2024-11-20 16:15:45.215936] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.215944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.527 [2024-11-20 16:15:45.215967] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.527 [2024-11-20 16:15:45.215972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:14.527 [2024-11-20 16:15:45.215979] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:14.527 [2024-11-20 16:15:45.215985] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.215993] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.527 [2024-11-20 16:15:45.216021] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.527 [2024-11-20 16:15:45.216027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:14.527 [2024-11-20 16:15:45.216033] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:14.527 [2024-11-20 16:15:45.216039] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:25:14.527 [2024-11-20 16:15:45.216045] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216052] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:25:14.527 [2024-11-20 16:15:45.216061] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:25:14.527 [2024-11-20 16:15:45.216069] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183f00 00:25:14.527 [2024-11-20 16:15:45.216115] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.527 [2024-11-20 16:15:45.216121] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:14.527 [2024-11-20 16:15:45.216130] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:25:14.527 [2024-11-20 16:15:45.216135] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:25:14.527 [2024-11-20 16:15:45.216141] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:25:14.527 [2024-11-20 16:15:45.216147] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:25:14.527 [2024-11-20 16:15:45.216153] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:25:14.527 [2024-11-20 16:15:45.216159] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:25:14.527 [2024-11-20 16:15:45.216165] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216175] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:25:14.527 [2024-11-20 16:15:45.216183] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.527 [2024-11-20 16:15:45.216210] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.527 [2024-11-20 16:15:45.216216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:14.527 [2024-11-20 16:15:45.216224] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.527 [2024-11-20 16:15:45.216238] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.527 [2024-11-20 16:15:45.216254] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.527 [2024-11-20 16:15:45.216267] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.527 [2024-11-20 16:15:45.216280] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout 
(timeout 30000 ms) 00:25:14.527 [2024-11-20 16:15:45.216286] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216296] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:14.527 [2024-11-20 16:15:45.216304] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216311] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.527 [2024-11-20 16:15:45.216330] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.527 [2024-11-20 16:15:45.216336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:25:14.527 [2024-11-20 16:15:45.216343] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:25:14.527 [2024-11-20 16:15:45.216349] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:25:14.527 [2024-11-20 16:15:45.216355] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216363] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183f00 00:25:14.527 [2024-11-20 16:15:45.216394] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.527 [2024-11-20 16:15:45.216400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:25:14.527 [2024-11-20 16:15:45.216407] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216417] nvme_ctrlr.c:4024:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:25:14.527 [2024-11-20 16:15:45.216440] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x400 key:0x183f00 00:25:14.527 [2024-11-20 16:15:45.216456] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216463] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.527 [2024-11-20 16:15:45.216477] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.527 [2024-11-20 16:15:45.216483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:14.527 [2024-11-20 16:15:45.216496] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: 
local addr 0x2000003d0b80 length 0x40 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216504] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0xc00 key:0x183f00 00:25:14.527 [2024-11-20 16:15:45.216510] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216521] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.527 [2024-11-20 16:15:45.216527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:14.527 [2024-11-20 16:15:45.216533] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183f00 00:25:14.527 [2024-11-20 16:15:45.216539] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.528 [2024-11-20 16:15:45.216545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:14.528 [2024-11-20 16:15:45.216554] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183f00 00:25:14.528 [2024-11-20 16:15:45.216561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:0 cdw10:00010070 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x8 key:0x183f00 00:25:14.528 [2024-11-20 16:15:45.216569] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183f00 00:25:14.528 [2024-11-20 16:15:45.216588] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.528 [2024-11-20 16:15:45.216594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.528 [2024-11-20 16:15:45.216606] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183f00 00:25:14.528 ===================================================== 00:25:14.528 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:14.528 ===================================================== 00:25:14.528 Controller Capabilities/Features 00:25:14.528 ================================ 00:25:14.528 Vendor ID: 0000 00:25:14.528 Subsystem Vendor ID: 0000 00:25:14.528 Serial Number: .................... 00:25:14.528 Model Number: ........................................ 
00:25:14.528 Firmware Version: 24.01.1 00:25:14.528 Recommended Arb Burst: 0 00:25:14.528 IEEE OUI Identifier: 00 00 00 00:25:14.528 Multi-path I/O 00:25:14.528 May have multiple subsystem ports: No 00:25:14.528 May have multiple controllers: No 00:25:14.528 Associated with SR-IOV VF: No 00:25:14.528 Max Data Transfer Size: 131072 00:25:14.528 Max Number of Namespaces: 0 00:25:14.528 Max Number of I/O Queues: 1024 00:25:14.528 NVMe Specification Version (VS): 1.3 00:25:14.528 NVMe Specification Version (Identify): 1.3 00:25:14.528 Maximum Queue Entries: 128 00:25:14.528 Contiguous Queues Required: Yes 00:25:14.528 Arbitration Mechanisms Supported 00:25:14.528 Weighted Round Robin: Not Supported 00:25:14.528 Vendor Specific: Not Supported 00:25:14.528 Reset Timeout: 15000 ms 00:25:14.528 Doorbell Stride: 4 bytes 00:25:14.528 NVM Subsystem Reset: Not Supported 00:25:14.528 Command Sets Supported 00:25:14.528 NVM Command Set: Supported 00:25:14.528 Boot Partition: Not Supported 00:25:14.528 Memory Page Size Minimum: 4096 bytes 00:25:14.528 Memory Page Size Maximum: 4096 bytes 00:25:14.528 Persistent Memory Region: Not Supported 00:25:14.528 Optional Asynchronous Events Supported 00:25:14.528 Namespace Attribute Notices: Not Supported 00:25:14.528 Firmware Activation Notices: Not Supported 00:25:14.528 ANA Change Notices: Not Supported 00:25:14.528 PLE Aggregate Log Change Notices: Not Supported 00:25:14.528 LBA Status Info Alert Notices: Not Supported 00:25:14.528 EGE Aggregate Log Change Notices: Not Supported 00:25:14.528 Normal NVM Subsystem Shutdown event: Not Supported 00:25:14.528 Zone Descriptor Change Notices: Not Supported 00:25:14.528 Discovery Log Change Notices: Supported 00:25:14.528 Controller Attributes 00:25:14.528 128-bit Host Identifier: Not Supported 00:25:14.528 Non-Operational Permissive Mode: Not Supported 00:25:14.528 NVM Sets: Not Supported 00:25:14.528 Read Recovery Levels: Not Supported 00:25:14.528 Endurance Groups: Not Supported 00:25:14.528 Predictable Latency Mode: Not Supported 00:25:14.528 Traffic Based Keep ALive: Not Supported 00:25:14.528 Namespace Granularity: Not Supported 00:25:14.528 SQ Associations: Not Supported 00:25:14.528 UUID List: Not Supported 00:25:14.528 Multi-Domain Subsystem: Not Supported 00:25:14.528 Fixed Capacity Management: Not Supported 00:25:14.528 Variable Capacity Management: Not Supported 00:25:14.528 Delete Endurance Group: Not Supported 00:25:14.528 Delete NVM Set: Not Supported 00:25:14.528 Extended LBA Formats Supported: Not Supported 00:25:14.528 Flexible Data Placement Supported: Not Supported 00:25:14.528 00:25:14.528 Controller Memory Buffer Support 00:25:14.528 ================================ 00:25:14.528 Supported: No 00:25:14.528 00:25:14.528 Persistent Memory Region Support 00:25:14.528 ================================ 00:25:14.528 Supported: No 00:25:14.528 00:25:14.528 Admin Command Set Attributes 00:25:14.528 ============================ 00:25:14.528 Security Send/Receive: Not Supported 00:25:14.528 Format NVM: Not Supported 00:25:14.528 Firmware Activate/Download: Not Supported 00:25:14.528 Namespace Management: Not Supported 00:25:14.528 Device Self-Test: Not Supported 00:25:14.528 Directives: Not Supported 00:25:14.528 NVMe-MI: Not Supported 00:25:14.528 Virtualization Management: Not Supported 00:25:14.528 Doorbell Buffer Config: Not Supported 00:25:14.528 Get LBA Status Capability: Not Supported 00:25:14.528 Command & Feature Lockdown Capability: Not Supported 00:25:14.528 Abort Command Limit: 1 00:25:14.528 
Async Event Request Limit: 4 00:25:14.528 Number of Firmware Slots: N/A 00:25:14.528 Firmware Slot 1 Read-Only: N/A 00:25:14.528 Firmware Activation Without Reset: N/A 00:25:14.528 Multiple Update Detection Support: N/A 00:25:14.528 Firmware Update Granularity: No Information Provided 00:25:14.528 Per-Namespace SMART Log: No 00:25:14.528 Asymmetric Namespace Access Log Page: Not Supported 00:25:14.528 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:14.528 Command Effects Log Page: Not Supported 00:25:14.528 Get Log Page Extended Data: Supported 00:25:14.528 Telemetry Log Pages: Not Supported 00:25:14.528 Persistent Event Log Pages: Not Supported 00:25:14.528 Supported Log Pages Log Page: May Support 00:25:14.528 Commands Supported & Effects Log Page: Not Supported 00:25:14.528 Feature Identifiers & Effects Log Page:May Support 00:25:14.528 NVMe-MI Commands & Effects Log Page: May Support 00:25:14.528 Data Area 4 for Telemetry Log: Not Supported 00:25:14.528 Error Log Page Entries Supported: 128 00:25:14.528 Keep Alive: Not Supported 00:25:14.528 00:25:14.528 NVM Command Set Attributes 00:25:14.528 ========================== 00:25:14.528 Submission Queue Entry Size 00:25:14.528 Max: 1 00:25:14.528 Min: 1 00:25:14.528 Completion Queue Entry Size 00:25:14.528 Max: 1 00:25:14.528 Min: 1 00:25:14.528 Number of Namespaces: 0 00:25:14.528 Compare Command: Not Supported 00:25:14.528 Write Uncorrectable Command: Not Supported 00:25:14.528 Dataset Management Command: Not Supported 00:25:14.528 Write Zeroes Command: Not Supported 00:25:14.528 Set Features Save Field: Not Supported 00:25:14.528 Reservations: Not Supported 00:25:14.528 Timestamp: Not Supported 00:25:14.528 Copy: Not Supported 00:25:14.528 Volatile Write Cache: Not Present 00:25:14.528 Atomic Write Unit (Normal): 1 00:25:14.528 Atomic Write Unit (PFail): 1 00:25:14.528 Atomic Compare & Write Unit: 1 00:25:14.528 Fused Compare & Write: Supported 00:25:14.528 Scatter-Gather List 00:25:14.528 SGL Command Set: Supported 00:25:14.528 SGL Keyed: Supported 00:25:14.528 SGL Bit Bucket Descriptor: Not Supported 00:25:14.528 SGL Metadata Pointer: Not Supported 00:25:14.528 Oversized SGL: Not Supported 00:25:14.528 SGL Metadata Address: Not Supported 00:25:14.528 SGL Offset: Supported 00:25:14.528 Transport SGL Data Block: Not Supported 00:25:14.528 Replay Protected Memory Block: Not Supported 00:25:14.528 00:25:14.528 Firmware Slot Information 00:25:14.528 ========================= 00:25:14.528 Active slot: 0 00:25:14.528 00:25:14.528 00:25:14.528 Error Log 00:25:14.528 ========= 00:25:14.528 00:25:14.528 Active Namespaces 00:25:14.528 ================= 00:25:14.528 Discovery Log Page 00:25:14.528 ================== 00:25:14.528 Generation Counter: 2 00:25:14.528 Number of Records: 2 00:25:14.528 Record Format: 0 00:25:14.528 00:25:14.528 Discovery Log Entry 0 00:25:14.528 ---------------------- 00:25:14.528 Transport Type: 1 (RDMA) 00:25:14.528 Address Family: 1 (IPv4) 00:25:14.528 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:14.528 Entry Flags: 00:25:14.528 Duplicate Returned Information: 1 00:25:14.528 Explicit Persistent Connection Support for Discovery: 1 00:25:14.528 Transport Requirements: 00:25:14.528 Secure Channel: Not Required 00:25:14.528 Port ID: 0 (0x0000) 00:25:14.528 Controller ID: 65535 (0xffff) 00:25:14.528 Admin Max SQ Size: 128 00:25:14.528 Transport Service Identifier: 4420 00:25:14.528 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:14.528 Transport Address: 192.168.100.8 
00:25:14.528 Transport Specific Address Subtype - RDMA 00:25:14.528 RDMA QP Service Type: 1 (Reliable Connected) 00:25:14.528 RDMA Provider Type: 1 (No provider specified) 00:25:14.528 RDMA CM Service: 1 (RDMA_CM) 00:25:14.528 Discovery Log Entry 1 00:25:14.528 ---------------------- 00:25:14.529 Transport Type: 1 (RDMA) 00:25:14.529 Address Family: 1 (IPv4) 00:25:14.529 Subsystem Type: 2 (NVM Subsystem) 00:25:14.529 Entry Flags: 00:25:14.529 Duplicate Returned Information: 0 00:25:14.529 Explicit Persistent Connection Support for Discovery: 0 00:25:14.529 Transport Requirements: 00:25:14.529 Secure Channel: Not Required 00:25:14.529 Port ID: 0 (0x0000) 00:25:14.529 Controller ID: 65535 (0xffff) 00:25:14.529 Admin Max SQ Size: [2024-11-20 16:15:45.216679] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:25:14.529 [2024-11-20 16:15:45.216691] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21359 doesn't match qid 00:25:14.529 [2024-11-20 16:15:45.216706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32648 cdw0:5 sqhd:6e28 p:0 m:0 dnr:0 00:25:14.529 [2024-11-20 16:15:45.216713] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21359 doesn't match qid 00:25:14.529 [2024-11-20 16:15:45.216721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32648 cdw0:5 sqhd:6e28 p:0 m:0 dnr:0 00:25:14.529 [2024-11-20 16:15:45.216727] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21359 doesn't match qid 00:25:14.529 [2024-11-20 16:15:45.216735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32648 cdw0:5 sqhd:6e28 p:0 m:0 dnr:0 00:25:14.529 [2024-11-20 16:15:45.216742] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 21359 doesn't match qid 00:25:14.529 [2024-11-20 16:15:45.216749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32648 cdw0:5 sqhd:6e28 p:0 m:0 dnr:0 00:25:14.529 [2024-11-20 16:15:45.216758] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.216766] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.529 [2024-11-20 16:15:45.216783] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.529 [2024-11-20 16:15:45.216789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0010 p:0 m:0 dnr:0 00:25:14.529 [2024-11-20 16:15:45.216797] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.216805] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.529 [2024-11-20 16:15:45.216813] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.216829] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.529 [2024-11-20 16:15:45.216835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.529 [2024-11-20 16:15:45.216843] 
nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:25:14.529 [2024-11-20 16:15:45.216849] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:25:14.529 [2024-11-20 16:15:45.216855] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.216863] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.216871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.529 [2024-11-20 16:15:45.216895] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.529 [2024-11-20 16:15:45.216901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:14.529 [2024-11-20 16:15:45.216908] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.216917] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.216925] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.529 [2024-11-20 16:15:45.216941] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.529 [2024-11-20 16:15:45.216946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:14.529 [2024-11-20 16:15:45.216953] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.216962] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.216971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.529 [2024-11-20 16:15:45.216988] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.529 [2024-11-20 16:15:45.216994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:14.529 [2024-11-20 16:15:45.217000] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217009] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217018] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.529 [2024-11-20 16:15:45.217034] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.529 [2024-11-20 16:15:45.217040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:14.529 [2024-11-20 16:15:45.217047] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217056] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: 
*DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.529 [2024-11-20 16:15:45.217083] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.529 [2024-11-20 16:15:45.217091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:14.529 [2024-11-20 16:15:45.217098] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217107] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217117] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.529 [2024-11-20 16:15:45.217140] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.529 [2024-11-20 16:15:45.217146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:14.529 [2024-11-20 16:15:45.217152] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217161] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217170] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.529 [2024-11-20 16:15:45.217188] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.529 [2024-11-20 16:15:45.217194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:14.529 [2024-11-20 16:15:45.217201] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217210] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217218] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.529 [2024-11-20 16:15:45.217237] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.529 [2024-11-20 16:15:45.217243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:14.529 [2024-11-20 16:15:45.217249] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217258] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217266] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.529 [2024-11-20 16:15:45.217286] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.529 [2024-11-20 16:15:45.217291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 
dnr:0 00:25:14.529 [2024-11-20 16:15:45.217298] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217307] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.529 [2024-11-20 16:15:45.217335] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.529 [2024-11-20 16:15:45.217341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:14.529 [2024-11-20 16:15:45.217348] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217357] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.529 [2024-11-20 16:15:45.217380] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.529 [2024-11-20 16:15:45.217387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:14.529 [2024-11-20 16:15:45.217394] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217402] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217410] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.529 [2024-11-20 16:15:45.217430] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.529 [2024-11-20 16:15:45.217435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:14.529 [2024-11-20 16:15:45.217442] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217450] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.529 [2024-11-20 16:15:45.217458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.530 [2024-11-20 16:15:45.217478] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.530 [2024-11-20 16:15:45.217483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:14.530 [2024-11-20 16:15:45.217490] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217499] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217506] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.530 [2024-11-20 
16:15:45.217530] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.530 [2024-11-20 16:15:45.217536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:14.530 [2024-11-20 16:15:45.217542] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217551] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.530 [2024-11-20 16:15:45.217575] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.530 [2024-11-20 16:15:45.217581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:14.530 [2024-11-20 16:15:45.217588] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217597] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.530 [2024-11-20 16:15:45.217625] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.530 [2024-11-20 16:15:45.217630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:14.530 [2024-11-20 16:15:45.217637] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217645] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217653] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.530 [2024-11-20 16:15:45.217671] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.530 [2024-11-20 16:15:45.217677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:14.530 [2024-11-20 16:15:45.217683] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217692] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217700] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.530 [2024-11-20 16:15:45.217714] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.530 [2024-11-20 16:15:45.217719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:14.530 [2024-11-20 16:15:45.217726] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217735] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 
0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.530 [2024-11-20 16:15:45.217758] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.530 [2024-11-20 16:15:45.217764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:14.530 [2024-11-20 16:15:45.217770] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217779] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217787] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.530 [2024-11-20 16:15:45.217801] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.530 [2024-11-20 16:15:45.217806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:14.530 [2024-11-20 16:15:45.217813] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217821] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.530 [2024-11-20 16:15:45.217849] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.530 [2024-11-20 16:15:45.217854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:14.530 [2024-11-20 16:15:45.217862] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217871] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217878] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.530 [2024-11-20 16:15:45.217895] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.530 [2024-11-20 16:15:45.217901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:14.530 [2024-11-20 16:15:45.217908] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217918] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.530 [2024-11-20 16:15:45.217942] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.530 [2024-11-20 16:15:45.217949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:14.530 
[2024-11-20 16:15:45.217956] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217966] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.217975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.530 [2024-11-20 16:15:45.217990] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.530 [2024-11-20 16:15:45.217996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:14.530 [2024-11-20 16:15:45.218003] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.218013] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.218021] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.530 [2024-11-20 16:15:45.218041] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.530 [2024-11-20 16:15:45.218047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:14.530 [2024-11-20 16:15:45.218055] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.218065] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.218073] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.530 [2024-11-20 16:15:45.218093] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.530 [2024-11-20 16:15:45.218099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:14.530 [2024-11-20 16:15:45.218107] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.218116] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.218124] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.530 [2024-11-20 16:15:45.218146] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.530 [2024-11-20 16:15:45.218151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:25:14.530 [2024-11-20 16:15:45.218158] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.218166] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.530 [2024-11-20 16:15:45.218174] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.218190] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.218196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 16:15:45.218202] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218211] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.218236] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.218241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 16:15:45.218248] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218256] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218264] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.218285] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.218291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 16:15:45.218298] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218308] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218317] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.218334] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.218340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 16:15:45.218346] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218356] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.218378] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.218384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 16:15:45.218390] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218398] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.218420] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.218426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 16:15:45.218432] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218442] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218449] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.218471] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.218476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 16:15:45.218483] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218491] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218501] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.218527] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.218532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 16:15:45.218539] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218547] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.218575] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.218580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 16:15:45.218587] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218595] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218603] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.218621] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.218627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 
16:15:45.218634] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218644] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218652] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.218670] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.218676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 16:15:45.218682] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218690] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.218713] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.218719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 16:15:45.218726] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218734] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.218764] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.218770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 16:15:45.218776] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218786] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218794] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.218809] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.218815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 16:15:45.218821] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218830] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218837] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.218861] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.218867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 16:15:45.218873] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218882] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218890] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.218909] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.218914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 16:15:45.218920] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218929] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218939] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.218955] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.218960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 16:15:45.218967] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218975] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.218984] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.531 [2024-11-20 16:15:45.219004] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.531 [2024-11-20 16:15:45.219010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:14.531 [2024-11-20 16:15:45.219017] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.219027] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.531 [2024-11-20 16:15:45.219035] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.532 [2024-11-20 16:15:45.219054] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.532 [2024-11-20 16:15:45.219060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:14.532 [2024-11-20 16:15:45.219066] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219076] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.532 [2024-11-20 16:15:45.219110] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.532 [2024-11-20 16:15:45.219116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:14.532 [2024-11-20 16:15:45.219123] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219133] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.532 [2024-11-20 16:15:45.219159] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.532 [2024-11-20 16:15:45.219165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:14.532 [2024-11-20 16:15:45.219171] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219180] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219188] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.532 [2024-11-20 16:15:45.219203] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.532 [2024-11-20 16:15:45.219209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:14.532 [2024-11-20 16:15:45.219215] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219224] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219231] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.532 [2024-11-20 16:15:45.219254] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.532 [2024-11-20 16:15:45.219259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:14.532 [2024-11-20 16:15:45.219266] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219274] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.532 [2024-11-20 16:15:45.219298] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.532 [2024-11-20 16:15:45.219304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:14.532 [2024-11-20 
16:15:45.219310] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219319] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.532 [2024-11-20 16:15:45.219342] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.532 [2024-11-20 16:15:45.219348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:14.532 [2024-11-20 16:15:45.219356] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219364] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219373] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.532 [2024-11-20 16:15:45.219390] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.532 [2024-11-20 16:15:45.219397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:14.532 [2024-11-20 16:15:45.219403] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219413] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219421] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.532 [2024-11-20 16:15:45.219437] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.532 [2024-11-20 16:15:45.219444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:14.532 [2024-11-20 16:15:45.219452] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219461] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.532 [2024-11-20 16:15:45.219489] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.532 [2024-11-20 16:15:45.219496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:14.532 [2024-11-20 16:15:45.219503] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.219512] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.223528] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.532 [2024-11-20 16:15:45.223542] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.532 [2024-11-20 16:15:45.223548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0009 p:0 m:0 dnr:0 00:25:14.532 [2024-11-20 16:15:45.223554] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183f00 00:25:14.532 [2024-11-20 16:15:45.223561] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:25:14.532 128 00:25:14.532 Transport Service Identifier: 4420 00:25:14.532 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:14.532 Transport Address: 192.168.100.8 00:25:14.532 Transport Specific Address Subtype - RDMA 00:25:14.532 RDMA QP Service Type: 1 (Reliable Connected) 00:25:14.532 RDMA Provider Type: 1 (No provider specified) 00:25:14.532 RDMA CM Service: 1 (RDMA_CM) 00:25:14.532 16:15:45 -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:14.532 [2024-11-20 16:15:45.287939] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:14.532 [2024-11-20 16:15:45.287980] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1454154 ] 00:25:14.532 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.795 [2024-11-20 16:15:45.332717] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:25:14.795 [2024-11-20 16:15:45.332779] nvme_rdma.c:2257:nvme_rdma_ctrlr_construct: *DEBUG*: successfully initialized the nvmf ctrlr 00:25:14.795 [2024-11-20 16:15:45.332802] nvme_rdma.c:1287:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: adrfam 1 ai_family 2 00:25:14.795 [2024-11-20 16:15:45.332807] nvme_rdma.c:1291:nvme_rdma_ctrlr_connect_qpair: *DEBUG*: trsvcid is 4420 00:25:14.795 [2024-11-20 16:15:45.332830] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:25:14.795 [2024-11-20 16:15:45.344955] nvme_rdma.c: 506:nvme_rdma_qpair_process_cm_event: *DEBUG*: Requested queue depth 32. Target receive queue depth 32. 
00:25:14.795 [2024-11-20 16:15:45.355024] nvme_rdma.c:1176:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:14.795 [2024-11-20 16:15:45.355035] nvme_rdma.c:1181:nvme_rdma_connect_established: *DEBUG*: RDMA requests created 00:25:14.795 [2024-11-20 16:15:45.355041] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355048] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355055] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355061] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355068] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355074] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355080] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355086] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355092] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355098] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355104] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355110] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355116] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355122] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355128] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355134] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355140] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355146] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355152] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355158] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355165] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355171] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355180] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 
16:15:45.355186] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355192] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355198] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355204] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355210] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355216] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355222] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355228] nvme_rdma.c: 964:nvme_rdma_create_rsps: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183f00 00:25:14.795 [2024-11-20 16:15:45.355234] nvme_rdma.c:1195:nvme_rdma_connect_established: *DEBUG*: RDMA responses created 00:25:14.795 [2024-11-20 16:15:45.355239] nvme_rdma.c:1198:nvme_rdma_connect_established: *DEBUG*: rc =0 00:25:14.796 [2024-11-20 16:15:45.355244] nvme_rdma.c:1203:nvme_rdma_connect_established: *DEBUG*: RDMA responses submitted 00:25:14.796 [2024-11-20 16:15:45.355261] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.355273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf240 len:0x400 key:0x183f00 00:25:14.796 [2024-11-20 16:15:45.360522] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.796 [2024-11-20 16:15:45.360531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:14.796 [2024-11-20 16:15:45.360539] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.360548] nvme_fabric.c: 620:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:14.796 [2024-11-20 16:15:45.360554] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:25:14.796 [2024-11-20 16:15:45.360561] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:25:14.796 [2024-11-20 16:15:45.360572] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.360580] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.796 [2024-11-20 16:15:45.360597] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.796 [2024-11-20 16:15:45.360603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:10300 sqhd:0002 p:0 m:0 dnr:0 00:25:14.796 [2024-11-20 16:15:45.360609] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:25:14.796 [2024-11-20 16:15:45.360615] nvme_rdma.c:2425:nvme_rdma_request_ready: 
*DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.360622] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:25:14.796 [2024-11-20 16:15:45.360630] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.360637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.796 [2024-11-20 16:15:45.360654] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.796 [2024-11-20 16:15:45.360661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1e01007f sqhd:0003 p:0 m:0 dnr:0 00:25:14.796 [2024-11-20 16:15:45.360668] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:25:14.796 [2024-11-20 16:15:45.360674] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.360681] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:25:14.796 [2024-11-20 16:15:45.360688] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.360696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.796 [2024-11-20 16:15:45.360714] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.796 [2024-11-20 16:15:45.360720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:25:14.796 [2024-11-20 16:15:45.360726] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:14.796 [2024-11-20 16:15:45.360732] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.360741] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.360748] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.796 [2024-11-20 16:15:45.360766] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.796 [2024-11-20 16:15:45.360772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:25:14.796 [2024-11-20 16:15:45.360778] nvme_ctrlr.c:3737:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:25:14.796 [2024-11-20 16:15:45.360784] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:25:14.796 [2024-11-20 16:15:45.360790] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.360797] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by 
writing CC.EN = 1 (timeout 15000 ms) 00:25:14.796 [2024-11-20 16:15:45.360903] nvme_ctrlr.c:3930:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:25:14.796 [2024-11-20 16:15:45.360908] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:14.796 [2024-11-20 16:15:45.360917] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.360924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.796 [2024-11-20 16:15:45.360946] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.796 [2024-11-20 16:15:45.360952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:25:14.796 [2024-11-20 16:15:45.360958] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:14.796 [2024-11-20 16:15:45.360964] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.360972] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.360980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.796 [2024-11-20 16:15:45.360994] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.796 [2024-11-20 16:15:45.360999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:14.796 [2024-11-20 16:15:45.361005] nvme_ctrlr.c:3772:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:14.796 [2024-11-20 16:15:45.361011] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:25:14.796 [2024-11-20 16:15:45.361017] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.361024] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:25:14.796 [2024-11-20 16:15:45.361032] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:25:14.796 [2024-11-20 16:15:45.361041] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.361048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183f00 00:25:14.796 [2024-11-20 16:15:45.361082] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.796 [2024-11-20 16:15:45.361087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:25:14.796 [2024-11-20 16:15:45.361096] nvme_ctrlr.c:1972:nvme_ctrlr_identify_done: 
*DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:25:14.796 [2024-11-20 16:15:45.361102] nvme_ctrlr.c:1976:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:25:14.796 [2024-11-20 16:15:45.361107] nvme_ctrlr.c:1979:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:25:14.796 [2024-11-20 16:15:45.361113] nvme_ctrlr.c:2003:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:25:14.796 [2024-11-20 16:15:45.361118] nvme_ctrlr.c:2018:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:25:14.796 [2024-11-20 16:15:45.361124] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:25:14.796 [2024-11-20 16:15:45.361130] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.361139] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:25:14.796 [2024-11-20 16:15:45.361147] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.361154] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.796 [2024-11-20 16:15:45.361174] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.796 [2024-11-20 16:15:45.361180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:25:14.796 [2024-11-20 16:15:45.361187] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0540 length 0x40 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.361194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.796 [2024-11-20 16:15:45.361201] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0680 length 0x40 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.361208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.796 [2024-11-20 16:15:45.361216] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.361223] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.796 [2024-11-20 16:15:45.361230] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.361237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.796 [2024-11-20 16:15:45.361243] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:14.796 [2024-11-20 16:15:45.361248] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.361258] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:14.796 [2024-11-20 16:15:45.361266] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.796 [2024-11-20 16:15:45.361273] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:0 cdw10:0000000f SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.796 [2024-11-20 16:15:45.361297] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.797 [2024-11-20 16:15:45.361303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:2710 sqhd:000a p:0 m:0 dnr:0 00:25:14.797 [2024-11-20 16:15:45.361309] nvme_ctrlr.c:2890:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:25:14.797 [2024-11-20 16:15:45.361315] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:14.797 [2024-11-20 16:15:45.361321] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361328] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:25:14.797 [2024-11-20 16:15:45.361336] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:14.797 [2024-11-20 16:15:45.361344] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361351] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.797 [2024-11-20 16:15:45.361373] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.797 [2024-11-20 16:15:45.361379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:7e007e sqhd:000b p:0 m:0 dnr:0 00:25:14.797 [2024-11-20 16:15:45.361428] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:25:14.797 [2024-11-20 16:15:45.361434] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361441] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:14.797 [2024-11-20 16:15:45.361450] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cc000 len:0x1000 key:0x183f00 00:25:14.797 [2024-11-20 16:15:45.361481] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.797 [2024-11-20 16:15:45.361487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:25:14.797 [2024-11-20 16:15:45.361503] nvme_ctrlr.c:4556:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:25:14.797 
[2024-11-20 16:15:45.361515] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:25:14.797 [2024-11-20 16:15:45.361526] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361534] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:25:14.797 [2024-11-20 16:15:45.361542] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000000 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183f00 00:25:14.797 [2024-11-20 16:15:45.361583] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.797 [2024-11-20 16:15:45.361589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:25:14.797 [2024-11-20 16:15:45.361601] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:14.797 [2024-11-20 16:15:45.361608] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361615] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:14.797 [2024-11-20 16:15:45.361624] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361631] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:1 cdw10:00000003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x1000 key:0x183f00 00:25:14.797 [2024-11-20 16:15:45.361659] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.797 [2024-11-20 16:15:45.361664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:25:14.797 [2024-11-20 16:15:45.361673] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:14.797 [2024-11-20 16:15:45.361679] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361686] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:25:14.797 [2024-11-20 16:15:45.361695] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:25:14.797 [2024-11-20 16:15:45.361702] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:14.797 [2024-11-20 16:15:45.361708] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:25:14.797 [2024-11-20 16:15:45.361714] nvme_ctrlr.c:2978:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - 
Host ID 00:25:14.797 [2024-11-20 16:15:45.361720] nvme_ctrlr.c:1472:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:25:14.797 [2024-11-20 16:15:45.361726] nvme_ctrlr.c:1478:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:25:14.797 [2024-11-20 16:15:45.361741] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361749] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:0 cdw10:00000001 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.797 [2024-11-20 16:15:45.361758] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.797 [2024-11-20 16:15:45.361775] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.797 [2024-11-20 16:15:45.361781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:25:14.797 [2024-11-20 16:15:45.361787] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361793] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.797 [2024-11-20 16:15:45.361798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:25:14.797 [2024-11-20 16:15:45.361805] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361814] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361821] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.797 [2024-11-20 16:15:45.361838] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.797 [2024-11-20 16:15:45.361844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:25:14.797 [2024-11-20 16:15:45.361850] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361859] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361867] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.797 [2024-11-20 16:15:45.361884] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.797 [2024-11-20 16:15:45.361890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:25:14.797 [2024-11-20 16:15:45.361896] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361905] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 
lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361912] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.797 [2024-11-20 16:15:45.361932] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.797 [2024-11-20 16:15:45.361938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:7e007e sqhd:0013 p:0 m:0 dnr:0 00:25:14.797 [2024-11-20 16:15:45.361944] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361955] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0a40 length 0x40 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361963] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c9000 len:0x2000 key:0x183f00 00:25:14.797 [2024-11-20 16:15:45.361971] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0400 length 0x40 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:0 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003cf000 len:0x200 key:0x183f00 00:25:14.797 [2024-11-20 16:15:45.361987] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0b80 length 0x40 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.361995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003ce000 len:0x200 key:0x183f00 00:25:14.797 [2024-11-20 16:15:45.362004] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.362011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL KEYED DATA BLOCK ADDRESS 0x2000003c7000 len:0x1000 key:0x183f00 00:25:14.797 [2024-11-20 16:15:45.362020] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.797 [2024-11-20 16:15:45.362025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:25:14.797 [2024-11-20 16:15:45.362037] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.362044] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.797 [2024-11-20 16:15:45.362049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:0 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:25:14.797 [2024-11-20 16:15:45.362058] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.362064] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.797 [2024-11-20 16:15:45.362070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:6 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:25:14.797 [2024-11-20 16:15:45.362077] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183f00 00:25:14.797 [2024-11-20 16:15:45.362083] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.798 [2024-11-20 16:15:45.362088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:25:14.798 [2024-11-20 16:15:45.362098] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183f00 00:25:14.798 ===================================================== 00:25:14.798 NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:14.798 ===================================================== 00:25:14.798 Controller Capabilities/Features 00:25:14.798 ================================ 00:25:14.798 Vendor ID: 8086 00:25:14.798 Subsystem Vendor ID: 8086 00:25:14.798 Serial Number: SPDK00000000000001 00:25:14.798 Model Number: SPDK bdev Controller 00:25:14.798 Firmware Version: 24.01.1 00:25:14.798 Recommended Arb Burst: 6 00:25:14.798 IEEE OUI Identifier: e4 d2 5c 00:25:14.798 Multi-path I/O 00:25:14.798 May have multiple subsystem ports: Yes 00:25:14.798 May have multiple controllers: Yes 00:25:14.798 Associated with SR-IOV VF: No 00:25:14.798 Max Data Transfer Size: 131072 00:25:14.798 Max Number of Namespaces: 32 00:25:14.798 Max Number of I/O Queues: 127 00:25:14.798 NVMe Specification Version (VS): 1.3 00:25:14.798 NVMe Specification Version (Identify): 1.3 00:25:14.798 Maximum Queue Entries: 128 00:25:14.798 Contiguous Queues Required: Yes 00:25:14.798 Arbitration Mechanisms Supported 00:25:14.798 Weighted Round Robin: Not Supported 00:25:14.798 Vendor Specific: Not Supported 00:25:14.798 Reset Timeout: 15000 ms 00:25:14.798 Doorbell Stride: 4 bytes 00:25:14.798 NVM Subsystem Reset: Not Supported 00:25:14.798 Command Sets Supported 00:25:14.798 NVM Command Set: Supported 00:25:14.798 Boot Partition: Not Supported 00:25:14.798 Memory Page Size Minimum: 4096 bytes 00:25:14.798 Memory Page Size Maximum: 4096 bytes 00:25:14.798 Persistent Memory Region: Not Supported 00:25:14.798 Optional Asynchronous Events Supported 00:25:14.798 Namespace Attribute Notices: Supported 00:25:14.798 Firmware Activation Notices: Not Supported 00:25:14.798 ANA Change Notices: Not Supported 00:25:14.798 PLE Aggregate Log Change Notices: Not Supported 00:25:14.798 LBA Status Info Alert Notices: Not Supported 00:25:14.798 EGE Aggregate Log Change Notices: Not Supported 00:25:14.798 Normal NVM Subsystem Shutdown event: Not Supported 00:25:14.798 Zone Descriptor Change Notices: Not Supported 00:25:14.798 Discovery Log Change Notices: Not Supported 00:25:14.798 Controller Attributes 00:25:14.798 128-bit Host Identifier: Supported 00:25:14.798 Non-Operational Permissive Mode: Not Supported 00:25:14.798 NVM Sets: Not Supported 00:25:14.798 Read Recovery Levels: Not Supported 00:25:14.798 Endurance Groups: Not Supported 00:25:14.798 Predictable Latency Mode: Not Supported 00:25:14.798 Traffic Based Keep ALive: Not Supported 00:25:14.798 Namespace Granularity: Not Supported 00:25:14.798 SQ Associations: Not Supported 00:25:14.798 UUID List: Not Supported 00:25:14.798 Multi-Domain Subsystem: Not Supported 00:25:14.798 Fixed Capacity Management: Not Supported 00:25:14.798 Variable Capacity Management: Not Supported 00:25:14.798 Delete Endurance Group: Not Supported 00:25:14.798 Delete NVM Set: Not Supported 00:25:14.798 Extended LBA Formats Supported: Not Supported 00:25:14.798 Flexible Data Placement Supported: Not Supported 00:25:14.798 00:25:14.798 Controller Memory Buffer Support 00:25:14.798 
================================ 00:25:14.798 Supported: No 00:25:14.798 00:25:14.798 Persistent Memory Region Support 00:25:14.798 ================================ 00:25:14.798 Supported: No 00:25:14.798 00:25:14.798 Admin Command Set Attributes 00:25:14.798 ============================ 00:25:14.798 Security Send/Receive: Not Supported 00:25:14.798 Format NVM: Not Supported 00:25:14.798 Firmware Activate/Download: Not Supported 00:25:14.798 Namespace Management: Not Supported 00:25:14.798 Device Self-Test: Not Supported 00:25:14.798 Directives: Not Supported 00:25:14.798 NVMe-MI: Not Supported 00:25:14.798 Virtualization Management: Not Supported 00:25:14.798 Doorbell Buffer Config: Not Supported 00:25:14.798 Get LBA Status Capability: Not Supported 00:25:14.798 Command & Feature Lockdown Capability: Not Supported 00:25:14.798 Abort Command Limit: 4 00:25:14.798 Async Event Request Limit: 4 00:25:14.798 Number of Firmware Slots: N/A 00:25:14.798 Firmware Slot 1 Read-Only: N/A 00:25:14.798 Firmware Activation Without Reset: N/A 00:25:14.798 Multiple Update Detection Support: N/A 00:25:14.798 Firmware Update Granularity: No Information Provided 00:25:14.798 Per-Namespace SMART Log: No 00:25:14.798 Asymmetric Namespace Access Log Page: Not Supported 00:25:14.798 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:14.798 Command Effects Log Page: Supported 00:25:14.798 Get Log Page Extended Data: Supported 00:25:14.798 Telemetry Log Pages: Not Supported 00:25:14.798 Persistent Event Log Pages: Not Supported 00:25:14.798 Supported Log Pages Log Page: May Support 00:25:14.798 Commands Supported & Effects Log Page: Not Supported 00:25:14.798 Feature Identifiers & Effects Log Page:May Support 00:25:14.798 NVMe-MI Commands & Effects Log Page: May Support 00:25:14.798 Data Area 4 for Telemetry Log: Not Supported 00:25:14.798 Error Log Page Entries Supported: 128 00:25:14.798 Keep Alive: Supported 00:25:14.798 Keep Alive Granularity: 10000 ms 00:25:14.798 00:25:14.798 NVM Command Set Attributes 00:25:14.798 ========================== 00:25:14.798 Submission Queue Entry Size 00:25:14.798 Max: 64 00:25:14.798 Min: 64 00:25:14.798 Completion Queue Entry Size 00:25:14.798 Max: 16 00:25:14.798 Min: 16 00:25:14.798 Number of Namespaces: 32 00:25:14.798 Compare Command: Supported 00:25:14.798 Write Uncorrectable Command: Not Supported 00:25:14.798 Dataset Management Command: Supported 00:25:14.798 Write Zeroes Command: Supported 00:25:14.798 Set Features Save Field: Not Supported 00:25:14.798 Reservations: Supported 00:25:14.798 Timestamp: Not Supported 00:25:14.798 Copy: Supported 00:25:14.798 Volatile Write Cache: Present 00:25:14.798 Atomic Write Unit (Normal): 1 00:25:14.798 Atomic Write Unit (PFail): 1 00:25:14.798 Atomic Compare & Write Unit: 1 00:25:14.798 Fused Compare & Write: Supported 00:25:14.798 Scatter-Gather List 00:25:14.798 SGL Command Set: Supported 00:25:14.798 SGL Keyed: Supported 00:25:14.798 SGL Bit Bucket Descriptor: Not Supported 00:25:14.798 SGL Metadata Pointer: Not Supported 00:25:14.798 Oversized SGL: Not Supported 00:25:14.798 SGL Metadata Address: Not Supported 00:25:14.798 SGL Offset: Supported 00:25:14.798 Transport SGL Data Block: Not Supported 00:25:14.798 Replay Protected Memory Block: Not Supported 00:25:14.798 00:25:14.798 Firmware Slot Information 00:25:14.798 ========================= 00:25:14.798 Active slot: 1 00:25:14.798 Slot 1 Firmware Revision: 24.01.1 00:25:14.798 00:25:14.798 00:25:14.798 Commands Supported and Effects 00:25:14.798 ============================== 
00:25:14.798 Admin Commands 00:25:14.798 -------------- 00:25:14.798 Get Log Page (02h): Supported 00:25:14.798 Identify (06h): Supported 00:25:14.798 Abort (08h): Supported 00:25:14.798 Set Features (09h): Supported 00:25:14.798 Get Features (0Ah): Supported 00:25:14.798 Asynchronous Event Request (0Ch): Supported 00:25:14.798 Keep Alive (18h): Supported 00:25:14.798 I/O Commands 00:25:14.798 ------------ 00:25:14.798 Flush (00h): Supported LBA-Change 00:25:14.798 Write (01h): Supported LBA-Change 00:25:14.798 Read (02h): Supported 00:25:14.798 Compare (05h): Supported 00:25:14.798 Write Zeroes (08h): Supported LBA-Change 00:25:14.798 Dataset Management (09h): Supported LBA-Change 00:25:14.798 Copy (19h): Supported LBA-Change 00:25:14.798 Unknown (79h): Supported LBA-Change 00:25:14.798 Unknown (7Ah): Supported 00:25:14.798 00:25:14.798 Error Log 00:25:14.798 ========= 00:25:14.798 00:25:14.798 Arbitration 00:25:14.798 =========== 00:25:14.798 Arbitration Burst: 1 00:25:14.798 00:25:14.798 Power Management 00:25:14.798 ================ 00:25:14.798 Number of Power States: 1 00:25:14.798 Current Power State: Power State #0 00:25:14.798 Power State #0: 00:25:14.798 Max Power: 0.00 W 00:25:14.798 Non-Operational State: Operational 00:25:14.798 Entry Latency: Not Reported 00:25:14.798 Exit Latency: Not Reported 00:25:14.798 Relative Read Throughput: 0 00:25:14.798 Relative Read Latency: 0 00:25:14.798 Relative Write Throughput: 0 00:25:14.798 Relative Write Latency: 0 00:25:14.798 Idle Power: Not Reported 00:25:14.798 Active Power: Not Reported 00:25:14.798 Non-Operational Permissive Mode: Not Supported 00:25:14.798 00:25:14.798 Health Information 00:25:14.798 ================== 00:25:14.798 Critical Warnings: 00:25:14.798 Available Spare Space: OK 00:25:14.798 Temperature: OK 00:25:14.798 Device Reliability: OK 00:25:14.798 Read Only: No 00:25:14.799 Volatile Memory Backup: OK 00:25:14.799 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:14.799 Temperature Threshol[2024-11-20 16:15:45.362182] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0cc0 length 0x40 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362191] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.799 [2024-11-20 16:15:45.362209] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.799 [2024-11-20 16:15:45.362214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:7 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362220] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362246] nvme_ctrlr.c:4220:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:25:14.799 [2024-11-20 16:15:45.362255] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 47123 doesn't match qid 00:25:14.799 [2024-11-20 16:15:45.362270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32529 cdw0:5 sqhd:7e28 p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362277] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 47123 doesn't match qid 00:25:14.799 [2024-11-20 16:15:45.362285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32529 cdw0:5 sqhd:7e28 p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362291] 
nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 47123 doesn't match qid 00:25:14.799 [2024-11-20 16:15:45.362299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32529 cdw0:5 sqhd:7e28 p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362305] nvme_qpair.c: 471:spdk_nvme_print_completion: *ERROR*: sqid 47123 doesn't match qid 00:25:14.799 [2024-11-20 16:15:45.362312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:32529 cdw0:5 sqhd:7e28 p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362321] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d0900 length 0x40 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362330] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:4 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.799 [2024-11-20 16:15:45.362345] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.799 [2024-11-20 16:15:45.362351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:4 cdw0:460001 sqhd:0019 p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362358] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.799 [2024-11-20 16:15:45.362372] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362385] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.799 [2024-11-20 16:15:45.362391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362397] nvme_ctrlr.c:1070:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:25:14.799 [2024-11-20 16:15:45.362402] nvme_ctrlr.c:1073:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:25:14.799 [2024-11-20 16:15:45.362408] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362417] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.799 [2024-11-20 16:15:45.362439] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.799 [2024-11-20 16:15:45.362445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362451] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362460] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.799 [2024-11-20 16:15:45.362492] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.799 [2024-11-20 16:15:45.362498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362505] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362514] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362527] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.799 [2024-11-20 16:15:45.362549] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.799 [2024-11-20 16:15:45.362555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362561] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362570] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362577] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.799 [2024-11-20 16:15:45.362597] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.799 [2024-11-20 16:15:45.362604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362611] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362620] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.799 [2024-11-20 16:15:45.362649] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.799 [2024-11-20 16:15:45.362655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362661] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362670] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362678] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.799 [2024-11-20 16:15:45.362695] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.799 [2024-11-20 16:15:45.362700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362707] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362717] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.799 [2024-11-20 16:15:45.362743] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.799 [2024-11-20 16:15:45.362749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362755] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362763] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.799 [2024-11-20 16:15:45.362790] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.799 [2024-11-20 16:15:45.362795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362802] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362810] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.799 [2024-11-20 16:15:45.362836] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.799 [2024-11-20 16:15:45.362842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362848] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362857] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362864] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.799 [2024-11-20 16:15:45.362878] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.799 [2024-11-20 16:15:45.362884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362891] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362899] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362907] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.799 [2024-11-20 16:15:45.362927] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.799 [2024-11-20 16:15:45.362932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 
16:15:45.362938] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362947] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.799 [2024-11-20 16:15:45.362968] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.799 [2024-11-20 16:15:45.362974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:14.799 [2024-11-20 16:15:45.362980] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362989] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.799 [2024-11-20 16:15:45.362996] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.799 [2024-11-20 16:15:45.363012] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.800 [2024-11-20 16:15:45.363017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:14.800 [2024-11-20 16:15:45.363024] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363032] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363040] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.800 [2024-11-20 16:15:45.363057] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.800 [2024-11-20 16:15:45.363063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0008 p:0 m:0 dnr:0 00:25:14.800 [2024-11-20 16:15:45.363070] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363078] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.800 [2024-11-20 16:15:45.363109] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.800 [2024-11-20 16:15:45.363115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0009 p:0 m:0 dnr:0 00:25:14.800 [2024-11-20 16:15:45.363121] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363130] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363137] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.800 [2024-11-20 16:15:45.363162] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.800 [2024-11-20 16:15:45.363167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000a p:0 m:0 dnr:0 00:25:14.800 [2024-11-20 16:15:45.363174] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf850 length 0x10 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363182] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.800 [2024-11-20 16:15:45.363213] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.800 [2024-11-20 16:15:45.363219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000b p:0 m:0 dnr:0 00:25:14.800 [2024-11-20 16:15:45.363225] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf878 length 0x10 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363234] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.800 [2024-11-20 16:15:45.363265] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.800 [2024-11-20 16:15:45.363270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000c p:0 m:0 dnr:0 00:25:14.800 [2024-11-20 16:15:45.363277] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8a0 length 0x10 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363285] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363293] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.800 [2024-11-20 16:15:45.363310] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.800 [2024-11-20 16:15:45.363316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000d p:0 m:0 dnr:0 00:25:14.800 [2024-11-20 16:15:45.363322] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8c8 length 0x10 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363331] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.800 [2024-11-20 16:15:45.363353] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.800 [2024-11-20 16:15:45.363358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000e p:0 m:0 dnr:0 00:25:14.800 [2024-11-20 16:15:45.363365] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf8f0 length 0x10 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363373] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363381] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.800 [2024-11-20 16:15:45.363400] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.800 [2024-11-20 16:15:45.363406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:000f p:0 m:0 dnr:0 00:25:14.800 [2024-11-20 16:15:45.363412] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf918 length 0x10 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363421] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.800 [2024-11-20 16:15:45.363449] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.800 [2024-11-20 16:15:45.363455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0010 p:0 m:0 dnr:0 00:25:14.800 [2024-11-20 16:15:45.363461] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf940 length 0x10 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363469] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363477] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.800 [2024-11-20 16:15:45.363500] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.800 [2024-11-20 16:15:45.363506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0011 p:0 m:0 dnr:0 00:25:14.800 [2024-11-20 16:15:45.363512] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf968 length 0x10 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363525] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.800 [2024-11-20 16:15:45.363553] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.800 [2024-11-20 16:15:45.363558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0012 p:0 m:0 dnr:0 00:25:14.800 [2024-11-20 16:15:45.363564] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf990 length 0x10 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363573] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.800 [2024-11-20 16:15:45.363600] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.800 [2024-11-20 16:15:45.363606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0013 p:0 m:0 dnr:0 00:25:14.800 [2024-11-20 
16:15:45.363612] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9b8 length 0x10 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363621] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363628] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.800 [2024-11-20 16:15:45.363646] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.800 [2024-11-20 16:15:45.363652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0014 p:0 m:0 dnr:0 00:25:14.800 [2024-11-20 16:15:45.363658] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf9e0 length 0x10 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363666] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.800 [2024-11-20 16:15:45.363693] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.800 [2024-11-20 16:15:45.363699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0015 p:0 m:0 dnr:0 00:25:14.800 [2024-11-20 16:15:45.363705] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa08 length 0x10 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363714] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.800 [2024-11-20 16:15:45.363723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.363739] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.363744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0016 p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 16:15:45.363750] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa30 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.363759] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.363767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.363782] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.363788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0017 p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 16:15:45.363794] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa58 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.363803] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.363810] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.363828] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.363833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0018 p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 16:15:45.363840] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfa80 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.363848] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.363856] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.363872] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.363877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0019 p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 16:15:45.363883] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaa8 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.363892] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.363900] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.363917] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.363923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001a p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 16:15:45.363929] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfad0 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.363937] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.363945] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.363959] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.363965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001b p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 16:15:45.363971] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfaf8 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.363979] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.363988] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.364000] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.364006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001c p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 16:15:45.364012] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb20 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364020] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 
0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364028] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.364048] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.364053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001d p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 16:15:45.364059] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb48 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364068] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.364095] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.364101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001e p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 16:15:45.364107] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cfb70 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364115] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364123] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.364139] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.364144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:001f p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 16:15:45.364150] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6c0 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364159] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364167] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.364188] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.364193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0000 p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 16:15:45.364200] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf6e8 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364208] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.364235] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.364241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0001 p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 
16:15:45.364247] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf710 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364257] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364265] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.364278] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.364284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0002 p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 16:15:45.364290] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf738 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364299] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364306] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.364324] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.364329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0003 p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 16:15:45.364336] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf760 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364344] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.364366] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.364371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0004 p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 16:15:45.364377] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf788 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364386] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.364415] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.364420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0005 p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 16:15:45.364427] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7b0 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364435] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.364466] 
nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.364472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0006 p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 16:15:45.364478] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf7d8 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364487] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.364494] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.801 [2024-11-20 16:15:45.364510] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.801 [2024-11-20 16:15:45.364515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:1 sqhd:0007 p:0 m:0 dnr:0 00:25:14.801 [2024-11-20 16:15:45.368529] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf800 length 0x10 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.368540] nvme_rdma.c:2329:nvme_rdma_qpair_submit_request: *DEBUG*: local addr 0x2000003d07c0 length 0x40 lkey 0x183f00 00:25:14.801 [2024-11-20 16:15:45.368548] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL KEYED DATA BLOCK ADDRESS 0x0 len:0x0 key:0x0 00:25:14.802 [2024-11-20 16:15:45.368567] nvme_rdma.c:2532:nvme_rdma_process_recv_completion: *DEBUG*: CQ recv completion 00:25:14.802 [2024-11-20 16:15:45.368572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:3 cdw0:9 sqhd:0008 p:0 m:0 dnr:0 00:25:14.802 [2024-11-20 16:15:45.368578] nvme_rdma.c:2425:nvme_rdma_request_ready: *DEBUG*: local addr 0x2000003cf828 length 0x10 lkey 0x183f00 00:25:14.802 [2024-11-20 16:15:45.368585] nvme_ctrlr.c:1192:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds 00:25:14.802 d: 0 Kelvin (-273 Celsius) 00:25:14.802 Available Spare: 0% 00:25:14.802 Available Spare Threshold: 0% 00:25:14.802 Life Percentage Used: 0% 00:25:14.802 Data Units Read: 0 00:25:14.802 Data Units Written: 0 00:25:14.802 Host Read Commands: 0 00:25:14.802 Host Write Commands: 0 00:25:14.802 Controller Busy Time: 0 minutes 00:25:14.802 Power Cycles: 0 00:25:14.802 Power On Hours: 0 hours 00:25:14.802 Unsafe Shutdowns: 0 00:25:14.802 Unrecoverable Media Errors: 0 00:25:14.802 Lifetime Error Log Entries: 0 00:25:14.802 Warning Temperature Time: 0 minutes 00:25:14.802 Critical Temperature Time: 0 minutes 00:25:14.802 00:25:14.802 Number of Queues 00:25:14.802 ================ 00:25:14.802 Number of I/O Submission Queues: 127 00:25:14.802 Number of I/O Completion Queues: 127 00:25:14.802 00:25:14.802 Active Namespaces 00:25:14.802 ================= 00:25:14.802 Namespace ID:1 00:25:14.802 Error Recovery Timeout: Unlimited 00:25:14.802 Command Set Identifier: NVM (00h) 00:25:14.802 Deallocate: Supported 00:25:14.802 Deallocated/Unwritten Error: Not Supported 00:25:14.802 Deallocated Read Value: Unknown 00:25:14.802 Deallocate in Write Zeroes: Not Supported 00:25:14.802 Deallocated Guard Field: 0xFFFF 00:25:14.802 Flush: Supported 00:25:14.802 Reservation: Supported 00:25:14.802 Namespace Sharing Capabilities: Multiple Controllers 00:25:14.802 Size (in LBAs): 131072 (0GiB) 00:25:14.802 Capacity (in LBAs): 131072 (0GiB) 00:25:14.802 Utilization (in LBAs): 
131072 (0GiB) 00:25:14.802 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:14.802 EUI64: ABCDEF0123456789 00:25:14.802 UUID: 2a501949-4597-4957-9174-0295b9b955dd 00:25:14.802 Thin Provisioning: Not Supported 00:25:14.802 Per-NS Atomic Units: Yes 00:25:14.802 Atomic Boundary Size (Normal): 0 00:25:14.802 Atomic Boundary Size (PFail): 0 00:25:14.802 Atomic Boundary Offset: 0 00:25:14.802 Maximum Single Source Range Length: 65535 00:25:14.802 Maximum Copy Length: 65535 00:25:14.802 Maximum Source Range Count: 1 00:25:14.802 NGUID/EUI64 Never Reused: No 00:25:14.802 Namespace Write Protected: No 00:25:14.802 Number of LBA Formats: 1 00:25:14.802 Current LBA Format: LBA Format #00 00:25:14.802 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:14.802 00:25:14.802 16:15:45 -- host/identify.sh@51 -- # sync 00:25:14.802 16:15:45 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:14.802 16:15:45 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:14.802 16:15:45 -- common/autotest_common.sh@10 -- # set +x 00:25:14.802 16:15:45 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:14.802 16:15:45 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:14.802 16:15:45 -- host/identify.sh@56 -- # nvmftestfini 00:25:14.802 16:15:45 -- nvmf/common.sh@476 -- # nvmfcleanup 00:25:14.802 16:15:45 -- nvmf/common.sh@116 -- # sync 00:25:14.802 16:15:45 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:25:14.802 16:15:45 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:25:14.802 16:15:45 -- nvmf/common.sh@119 -- # set +e 00:25:14.802 16:15:45 -- nvmf/common.sh@120 -- # for i in {1..20} 00:25:14.802 16:15:45 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:25:14.802 rmmod nvme_rdma 00:25:14.802 rmmod nvme_fabrics 00:25:14.802 16:15:45 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:25:14.802 16:15:45 -- nvmf/common.sh@123 -- # set -e 00:25:14.802 16:15:45 -- nvmf/common.sh@124 -- # return 0 00:25:14.802 16:15:45 -- nvmf/common.sh@477 -- # '[' -n 1453992 ']' 00:25:14.802 16:15:45 -- nvmf/common.sh@478 -- # killprocess 1453992 00:25:14.802 16:15:45 -- common/autotest_common.sh@936 -- # '[' -z 1453992 ']' 00:25:14.802 16:15:45 -- common/autotest_common.sh@940 -- # kill -0 1453992 00:25:14.802 16:15:45 -- common/autotest_common.sh@941 -- # uname 00:25:14.802 16:15:45 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:14.802 16:15:45 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1453992 00:25:14.802 16:15:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:14.802 16:15:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:14.802 16:15:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1453992' 00:25:14.802 killing process with pid 1453992 00:25:14.802 16:15:45 -- common/autotest_common.sh@955 -- # kill 1453992 00:25:14.802 [2024-11-20 16:15:45.546664] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:25:14.802 16:15:45 -- common/autotest_common.sh@960 -- # wait 1453992 00:25:15.061 16:15:45 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:25:15.061 16:15:45 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:25:15.061 00:25:15.061 real 0m8.055s 00:25:15.061 user 0m8.179s 00:25:15.061 sys 0m5.090s 00:25:15.061 16:15:45 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:25:15.061 16:15:45 -- common/autotest_common.sh@10 -- # 
set +x 00:25:15.061 ************************************ 00:25:15.061 END TEST nvmf_identify 00:25:15.061 ************************************ 00:25:15.061 16:15:45 -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:25:15.061 16:15:45 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:15.061 16:15:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:15.061 16:15:45 -- common/autotest_common.sh@10 -- # set +x 00:25:15.061 ************************************ 00:25:15.061 START TEST nvmf_perf 00:25:15.061 ************************************ 00:25:15.062 16:15:45 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=rdma 00:25:15.321 * Looking for test storage... 00:25:15.321 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:25:15.321 16:15:45 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:25:15.321 16:15:45 -- common/autotest_common.sh@1690 -- # lcov --version 00:25:15.321 16:15:45 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:25:15.321 16:15:46 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:25:15.321 16:15:46 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:25:15.321 16:15:46 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:25:15.321 16:15:46 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:25:15.321 16:15:46 -- scripts/common.sh@335 -- # IFS=.-: 00:25:15.321 16:15:46 -- scripts/common.sh@335 -- # read -ra ver1 00:25:15.321 16:15:46 -- scripts/common.sh@336 -- # IFS=.-: 00:25:15.321 16:15:46 -- scripts/common.sh@336 -- # read -ra ver2 00:25:15.321 16:15:46 -- scripts/common.sh@337 -- # local 'op=<' 00:25:15.321 16:15:46 -- scripts/common.sh@339 -- # ver1_l=2 00:25:15.321 16:15:46 -- scripts/common.sh@340 -- # ver2_l=1 00:25:15.321 16:15:46 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:25:15.321 16:15:46 -- scripts/common.sh@343 -- # case "$op" in 00:25:15.321 16:15:46 -- scripts/common.sh@344 -- # : 1 00:25:15.321 16:15:46 -- scripts/common.sh@363 -- # (( v = 0 )) 00:25:15.321 16:15:46 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:15.321 16:15:46 -- scripts/common.sh@364 -- # decimal 1 00:25:15.321 16:15:46 -- scripts/common.sh@352 -- # local d=1 00:25:15.321 16:15:46 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:15.321 16:15:46 -- scripts/common.sh@354 -- # echo 1 00:25:15.321 16:15:46 -- scripts/common.sh@364 -- # ver1[v]=1 00:25:15.321 16:15:46 -- scripts/common.sh@365 -- # decimal 2 00:25:15.321 16:15:46 -- scripts/common.sh@352 -- # local d=2 00:25:15.321 16:15:46 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:15.321 16:15:46 -- scripts/common.sh@354 -- # echo 2 00:25:15.321 16:15:46 -- scripts/common.sh@365 -- # ver2[v]=2 00:25:15.321 16:15:46 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:25:15.321 16:15:46 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:25:15.321 16:15:46 -- scripts/common.sh@367 -- # return 0 00:25:15.321 16:15:46 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:15.321 16:15:46 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:25:15.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.321 --rc genhtml_branch_coverage=1 00:25:15.321 --rc genhtml_function_coverage=1 00:25:15.321 --rc genhtml_legend=1 00:25:15.321 --rc geninfo_all_blocks=1 00:25:15.321 --rc geninfo_unexecuted_blocks=1 00:25:15.321 00:25:15.321 ' 00:25:15.321 16:15:46 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:25:15.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.321 --rc genhtml_branch_coverage=1 00:25:15.321 --rc genhtml_function_coverage=1 00:25:15.321 --rc genhtml_legend=1 00:25:15.321 --rc geninfo_all_blocks=1 00:25:15.321 --rc geninfo_unexecuted_blocks=1 00:25:15.321 00:25:15.321 ' 00:25:15.321 16:15:46 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:25:15.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.321 --rc genhtml_branch_coverage=1 00:25:15.321 --rc genhtml_function_coverage=1 00:25:15.321 --rc genhtml_legend=1 00:25:15.321 --rc geninfo_all_blocks=1 00:25:15.322 --rc geninfo_unexecuted_blocks=1 00:25:15.322 00:25:15.322 ' 00:25:15.322 16:15:46 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:25:15.322 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:15.322 --rc genhtml_branch_coverage=1 00:25:15.322 --rc genhtml_function_coverage=1 00:25:15.322 --rc genhtml_legend=1 00:25:15.322 --rc geninfo_all_blocks=1 00:25:15.322 --rc geninfo_unexecuted_blocks=1 00:25:15.322 00:25:15.322 ' 00:25:15.322 16:15:46 -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:25:15.322 16:15:46 -- nvmf/common.sh@7 -- # uname -s 00:25:15.322 16:15:46 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:15.322 16:15:46 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:15.322 16:15:46 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:15.322 16:15:46 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:15.322 16:15:46 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:15.322 16:15:46 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:15.322 16:15:46 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:15.322 16:15:46 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:15.322 16:15:46 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:15.322 16:15:46 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:15.322 16:15:46 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 
00:25:15.322 16:15:46 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:25:15.322 16:15:46 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:15.322 16:15:46 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:15.322 16:15:46 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:15.322 16:15:46 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:25:15.322 16:15:46 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:15.322 16:15:46 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:15.322 16:15:46 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:15.322 16:15:46 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.322 16:15:46 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.322 16:15:46 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.322 16:15:46 -- paths/export.sh@5 -- # export PATH 00:25:15.322 16:15:46 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:15.322 16:15:46 -- nvmf/common.sh@46 -- # : 0 00:25:15.322 16:15:46 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:25:15.322 16:15:46 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:25:15.322 16:15:46 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:25:15.322 16:15:46 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:15.322 16:15:46 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:15.322 16:15:46 -- 
nvmf/common.sh@32 -- # '[' -n '' ']' 00:25:15.322 16:15:46 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:25:15.322 16:15:46 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:25:15.322 16:15:46 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:15.322 16:15:46 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:15.322 16:15:46 -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:25:15.322 16:15:46 -- host/perf.sh@17 -- # nvmftestinit 00:25:15.322 16:15:46 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:25:15.322 16:15:46 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:15.322 16:15:46 -- nvmf/common.sh@436 -- # prepare_net_devs 00:25:15.322 16:15:46 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:25:15.322 16:15:46 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:25:15.322 16:15:46 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.322 16:15:46 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:15.322 16:15:46 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:15.322 16:15:46 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:25:15.322 16:15:46 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:25:15.322 16:15:46 -- nvmf/common.sh@284 -- # xtrace_disable 00:25:15.322 16:15:46 -- common/autotest_common.sh@10 -- # set +x 00:25:21.897 16:15:52 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:25:21.897 16:15:52 -- nvmf/common.sh@290 -- # pci_devs=() 00:25:21.897 16:15:52 -- nvmf/common.sh@290 -- # local -a pci_devs 00:25:21.897 16:15:52 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:25:21.897 16:15:52 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:25:21.897 16:15:52 -- nvmf/common.sh@292 -- # pci_drivers=() 00:25:21.897 16:15:52 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:25:21.897 16:15:52 -- nvmf/common.sh@294 -- # net_devs=() 00:25:21.897 16:15:52 -- nvmf/common.sh@294 -- # local -ga net_devs 00:25:21.897 16:15:52 -- nvmf/common.sh@295 -- # e810=() 00:25:21.897 16:15:52 -- nvmf/common.sh@295 -- # local -ga e810 00:25:21.897 16:15:52 -- nvmf/common.sh@296 -- # x722=() 00:25:21.897 16:15:52 -- nvmf/common.sh@296 -- # local -ga x722 00:25:21.897 16:15:52 -- nvmf/common.sh@297 -- # mlx=() 00:25:21.897 16:15:52 -- nvmf/common.sh@297 -- # local -ga mlx 00:25:21.897 16:15:52 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.897 16:15:52 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.897 16:15:52 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.897 16:15:52 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.897 16:15:52 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.897 16:15:52 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.897 16:15:52 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.897 16:15:52 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.897 16:15:52 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.897 16:15:52 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.897 16:15:52 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.897 16:15:52 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:25:21.897 16:15:52 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:25:21.897 16:15:52 -- nvmf/common.sh@321 -- # 
pci_devs+=("${x722[@]}") 00:25:21.897 16:15:52 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:25:21.897 16:15:52 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:25:21.897 16:15:52 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:25:21.898 16:15:52 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:25:21.898 16:15:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:21.898 16:15:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:25:21.898 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:25:21.898 16:15:52 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:21.898 16:15:52 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:25:21.898 16:15:52 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:25:21.898 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:25:21.898 16:15:52 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:25:21.898 16:15:52 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:25:21.898 16:15:52 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:21.898 16:15:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.898 16:15:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:21.898 16:15:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.898 16:15:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:25:21.898 Found net devices under 0000:d9:00.0: mlx_0_0 00:25:21.898 16:15:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.898 16:15:52 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:25:21.898 16:15:52 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.898 16:15:52 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:25:21.898 16:15:52 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.898 16:15:52 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:25:21.898 Found net devices under 0000:d9:00.1: mlx_0_1 00:25:21.898 16:15:52 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.898 16:15:52 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:25:21.898 16:15:52 -- nvmf/common.sh@402 -- # is_hw=yes 00:25:21.898 16:15:52 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@408 -- # rdma_device_init 00:25:21.898 16:15:52 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:25:21.898 16:15:52 -- nvmf/common.sh@57 -- # uname 00:25:21.898 16:15:52 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:25:21.898 16:15:52 -- nvmf/common.sh@61 
-- # modprobe ib_cm 00:25:21.898 16:15:52 -- nvmf/common.sh@62 -- # modprobe ib_core 00:25:21.898 16:15:52 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:25:21.898 16:15:52 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:25:21.898 16:15:52 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:25:21.898 16:15:52 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:25:21.898 16:15:52 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:25:21.898 16:15:52 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:25:21.898 16:15:52 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:25:21.898 16:15:52 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:25:21.898 16:15:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:21.898 16:15:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:21.898 16:15:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:21.898 16:15:52 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:21.898 16:15:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:21.898 16:15:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:21.898 16:15:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:21.898 16:15:52 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:21.898 16:15:52 -- nvmf/common.sh@104 -- # continue 2 00:25:21.898 16:15:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:21.898 16:15:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:21.898 16:15:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:21.898 16:15:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:21.898 16:15:52 -- nvmf/common.sh@104 -- # continue 2 00:25:21.898 16:15:52 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:21.898 16:15:52 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:25:21.898 16:15:52 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:21.898 16:15:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:21.898 16:15:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:21.898 16:15:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:21.898 16:15:52 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:25:21.898 16:15:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:25:21.898 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:21.898 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:25:21.898 altname enp217s0f0np0 00:25:21.898 altname ens818f0np0 00:25:21.898 inet 192.168.100.8/24 scope global mlx_0_0 00:25:21.898 valid_lft forever preferred_lft forever 00:25:21.898 16:15:52 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:25:21.898 16:15:52 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:25:21.898 16:15:52 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:21.898 16:15:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:21.898 16:15:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:21.898 16:15:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:21.898 16:15:52 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:25:21.898 16:15:52 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:25:21.898 7: mlx_0_1: 
mtu 1500 qdisc mq state DOWN group default qlen 1000 00:25:21.898 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:25:21.898 altname enp217s0f1np1 00:25:21.898 altname ens818f1np1 00:25:21.898 inet 192.168.100.9/24 scope global mlx_0_1 00:25:21.898 valid_lft forever preferred_lft forever 00:25:21.898 16:15:52 -- nvmf/common.sh@410 -- # return 0 00:25:21.898 16:15:52 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:25:21.898 16:15:52 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:25:21.898 16:15:52 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:25:21.898 16:15:52 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:25:21.898 16:15:52 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:25:21.898 16:15:52 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:25:21.898 16:15:52 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:25:21.898 16:15:52 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:25:21.898 16:15:52 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:25:21.898 16:15:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:21.898 16:15:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:21.898 16:15:52 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:25:21.898 16:15:52 -- nvmf/common.sh@104 -- # continue 2 00:25:21.898 16:15:52 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:25:21.898 16:15:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:21.898 16:15:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:25:21.898 16:15:52 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:25:21.898 16:15:52 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:25:21.898 16:15:52 -- nvmf/common.sh@104 -- # continue 2 00:25:21.898 16:15:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:21.898 16:15:52 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:25:21.898 16:15:52 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:25:21.898 16:15:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:25:21.898 16:15:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:21.898 16:15:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:21.898 16:15:52 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:25:21.898 16:15:52 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:25:21.898 16:15:52 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:25:21.898 16:15:52 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:25:21.898 16:15:52 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:25:21.898 16:15:52 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:25:21.898 16:15:52 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:25:21.898 192.168.100.9' 00:25:21.898 16:15:52 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:25:21.898 192.168.100.9' 00:25:21.898 16:15:52 -- nvmf/common.sh@445 -- # head -n 1 00:25:21.898 16:15:52 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:25:21.898 16:15:52 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:25:21.898 192.168.100.9' 00:25:21.898 16:15:52 -- nvmf/common.sh@446 -- # tail -n +2 00:25:21.898 16:15:52 -- nvmf/common.sh@446 -- # head -n 1 00:25:21.898 16:15:52 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:25:21.898 16:15:52 -- 
nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:25:21.898 16:15:52 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:25:21.899 16:15:52 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:25:21.899 16:15:52 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:25:21.899 16:15:52 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:25:21.899 16:15:52 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:21.899 16:15:52 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:25:21.899 16:15:52 -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:21.899 16:15:52 -- common/autotest_common.sh@10 -- # set +x 00:25:21.899 16:15:52 -- nvmf/common.sh@469 -- # nvmfpid=1457570 00:25:21.899 16:15:52 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:21.899 16:15:52 -- nvmf/common.sh@470 -- # waitforlisten 1457570 00:25:21.899 16:15:52 -- common/autotest_common.sh@829 -- # '[' -z 1457570 ']' 00:25:21.899 16:15:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.899 16:15:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:21.899 16:15:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:21.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.899 16:15:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:21.899 16:15:52 -- common/autotest_common.sh@10 -- # set +x 00:25:21.899 [2024-11-20 16:15:52.557826] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:25:21.899 [2024-11-20 16:15:52.557874] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.899 EAL: No free 2048 kB hugepages reported on node 1 00:25:21.899 [2024-11-20 16:15:52.627953] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:21.899 [2024-11-20 16:15:52.665365] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:25:21.899 [2024-11-20 16:15:52.665477] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.899 [2024-11-20 16:15:52.665489] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.899 [2024-11-20 16:15:52.665497] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
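The trace above launches the target with build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF and then sits in waitforlisten until the default RPC socket answers. A minimal sketch of that start-and-wait pattern, assuming the same tree layout and the default /var/tmp/spdk.sock socket seen in the trace; the polling loop is an illustrative stand-in for the waitforlisten helper, not its exact code:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Start the nvmf target with the same shm id, tracepoint mask and core mask as above
    "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket until the target is ready (simplified stand-in for waitforlisten)
    for _ in $(seq 1 100); do
        "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done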
00:25:21.899 [2024-11-20 16:15:52.665550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.899 [2024-11-20 16:15:52.665645] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.899 [2024-11-20 16:15:52.665705] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.899 [2024-11-20 16:15:52.665707] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:22.838 16:15:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:22.838 16:15:53 -- common/autotest_common.sh@862 -- # return 0 00:25:22.838 16:15:53 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:25:22.838 16:15:53 -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:22.838 16:15:53 -- common/autotest_common.sh@10 -- # set +x 00:25:22.838 16:15:53 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:22.838 16:15:53 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:22.838 16:15:53 -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:26.131 16:15:56 -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:26.131 16:15:56 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:26.131 16:15:56 -- host/perf.sh@30 -- # local_nvme_trid=0000:d8:00.0 00:25:26.131 16:15:56 -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:26.131 16:15:56 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:26.131 16:15:56 -- host/perf.sh@33 -- # '[' -n 0000:d8:00.0 ']' 00:25:26.131 16:15:56 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:26.131 16:15:56 -- host/perf.sh@37 -- # '[' rdma == rdma ']' 00:25:26.131 16:15:56 -- host/perf.sh@40 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0 00:25:26.390 [2024-11-20 16:15:57.039363] rdma.c:2780:nvmf_rdma_create: *WARNING*: In capsule data size is set to 256, this is minimum size required to support msdbd=16 00:25:26.390 [2024-11-20 16:15:57.060159] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1fd29c0/0x1fe0710) succeed. 00:25:26.391 [2024-11-20 16:15:57.069515] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fd3f60/0x2021db0) succeed. 
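At this point perf.sh has created the RDMA transport (nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0), which is what produces the in-capsule-data warning and the two Create IB device notices above. The rest of the target-side bring-up, condensed into a sketch from the rpc.py calls that follow in this trace (subsystem, both namespaces, and the data plus discovery listeners on 192.168.100.8:4420):

    rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -c 0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420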
00:25:26.391 16:15:57 -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:26.650 16:15:57 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:26.650 16:15:57 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:26.910 16:15:57 -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:26.910 16:15:57 -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:27.169 16:15:57 -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:27.169 [2024-11-20 16:15:57.896843] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:25:27.169 16:15:57 -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:25:27.427 16:15:58 -- host/perf.sh@52 -- # '[' -n 0000:d8:00.0 ']' 00:25:27.427 16:15:58 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:27.427 16:15:58 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:27.427 16:15:58 -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:d8:00.0' 00:25:28.807 Initializing NVMe Controllers 00:25:28.807 Attached to NVMe Controller at 0000:d8:00.0 [8086:0a54] 00:25:28.807 Associating PCIE (0000:d8:00.0) NSID 1 with lcore 0 00:25:28.807 Initialization complete. Launching workers. 00:25:28.807 ======================================================== 00:25:28.807 Latency(us) 00:25:28.807 Device Information : IOPS MiB/s Average min max 00:25:28.807 PCIE (0000:d8:00.0) NSID 1 from core 0: 102707.00 401.20 311.20 28.61 4235.65 00:25:28.807 ======================================================== 00:25:28.807 Total : 102707.00 401.20 311.20 28.61 4235.65 00:25:28.807 00:25:28.807 16:15:59 -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:28.807 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.098 Initializing NVMe Controllers 00:25:32.098 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:32.098 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:32.098 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:32.098 Initialization complete. Launching workers. 
00:25:32.098 ======================================================== 00:25:32.098 Latency(us) 00:25:32.098 Device Information : IOPS MiB/s Average min max 00:25:32.098 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 6821.99 26.65 146.38 47.20 6013.82 00:25:32.098 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 5300.99 20.71 188.44 66.27 6039.56 00:25:32.098 ======================================================== 00:25:32.099 Total : 12122.99 47.36 164.77 47.20 6039.56 00:25:32.099 00:25:32.099 16:16:02 -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:32.099 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.390 Initializing NVMe Controllers 00:25:35.390 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:35.390 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:35.390 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:35.390 Initialization complete. Launching workers. 00:25:35.390 ======================================================== 00:25:35.390 Latency(us) 00:25:35.390 Device Information : IOPS MiB/s Average min max 00:25:35.390 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19544.98 76.35 1639.08 444.43 5399.29 00:25:35.390 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4032.00 15.75 7971.43 5929.46 10032.97 00:25:35.390 ======================================================== 00:25:35.390 Total : 23576.98 92.10 2722.00 444.43 10032.97 00:25:35.390 00:25:35.390 16:16:06 -- host/perf.sh@59 -- # [[ mlx5 == \e\8\1\0 ]] 00:25:35.390 16:16:06 -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:35.649 EAL: No free 2048 kB hugepages reported on node 1 00:25:39.845 Initializing NVMe Controllers 00:25:39.845 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:39.845 Controller IO queue size 128, less than required. 00:25:39.845 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:39.845 Controller IO queue size 128, less than required. 00:25:39.845 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:39.845 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:39.845 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:39.845 Initialization complete. Launching workers. 
00:25:39.845 ======================================================== 00:25:39.845 Latency(us) 00:25:39.845 Device Information : IOPS MiB/s Average min max 00:25:39.845 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4085.50 1021.37 31514.36 14196.77 73158.00 00:25:39.845 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 4155.50 1038.87 30548.45 13552.27 47435.35 00:25:39.845 ======================================================== 00:25:39.845 Total : 8241.00 2060.25 31027.30 13552.27 73158.00 00:25:39.845 00:25:39.845 16:16:10 -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' -c 0xf -P 4 00:25:39.845 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.104 No valid NVMe controllers or AIO or URING devices found 00:25:40.104 Initializing NVMe Controllers 00:25:40.104 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:40.104 Controller IO queue size 128, less than required. 00:25:40.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:40.104 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:40.104 Controller IO queue size 128, less than required. 00:25:40.104 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:40.104 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:40.104 WARNING: Some requested NVMe devices were skipped 00:25:40.104 16:16:10 -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' --transport-stat 00:25:40.363 EAL: No free 2048 kB hugepages reported on node 1 00:25:44.560 Initializing NVMe Controllers 00:25:44.560 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:25:44.560 Controller IO queue size 128, less than required. 00:25:44.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:44.560 Controller IO queue size 128, less than required. 00:25:44.560 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:44.560 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:44.560 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:44.560 Initialization complete. Launching workers. 
00:25:44.560 00:25:44.560 ==================== 00:25:44.560 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:44.560 RDMA transport: 00:25:44.560 dev name: mlx5_0 00:25:44.560 polls: 419983 00:25:44.560 idle_polls: 415933 00:25:44.560 completions: 46380 00:25:44.560 queued_requests: 1 00:25:44.560 total_send_wrs: 23254 00:25:44.560 send_doorbell_updates: 3841 00:25:44.560 total_recv_wrs: 23254 00:25:44.560 recv_doorbell_updates: 3842 00:25:44.560 --------------------------------- 00:25:44.560 00:25:44.560 ==================== 00:25:44.560 lcore 0, ns RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:44.560 RDMA transport: 00:25:44.560 dev name: mlx5_0 00:25:44.560 polls: 418565 00:25:44.560 idle_polls: 418281 00:25:44.560 completions: 20311 00:25:44.560 queued_requests: 1 00:25:44.560 total_send_wrs: 10219 00:25:44.560 send_doorbell_updates: 256 00:25:44.560 total_recv_wrs: 10219 00:25:44.560 recv_doorbell_updates: 256 00:25:44.560 --------------------------------- 00:25:44.560 ======================================================== 00:25:44.560 Latency(us) 00:25:44.560 Device Information : IOPS MiB/s Average min max 00:25:44.560 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5844.99 1461.25 21957.70 10652.92 52229.61 00:25:44.560 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2586.49 646.62 49352.29 28771.45 76312.34 00:25:44.560 ======================================================== 00:25:44.560 Total : 8431.48 2107.87 30361.44 10652.92 76312.34 00:25:44.560 00:25:44.560 16:16:15 -- host/perf.sh@66 -- # sync 00:25:44.560 16:16:15 -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:44.819 16:16:15 -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:25:44.819 16:16:15 -- host/perf.sh@71 -- # '[' -n 0000:d8:00.0 ']' 00:25:44.819 16:16:15 -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:25:51.389 16:16:22 -- host/perf.sh@72 -- # ls_guid=45f57e5a-3135-425f-a5d9-6f4f479a0d54 00:25:51.389 16:16:22 -- host/perf.sh@73 -- # get_lvs_free_mb 45f57e5a-3135-425f-a5d9-6f4f479a0d54 00:25:51.389 16:16:22 -- common/autotest_common.sh@1353 -- # local lvs_uuid=45f57e5a-3135-425f-a5d9-6f4f479a0d54 00:25:51.389 16:16:22 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:51.389 16:16:22 -- common/autotest_common.sh@1355 -- # local fc 00:25:51.389 16:16:22 -- common/autotest_common.sh@1356 -- # local cs 00:25:51.389 16:16:22 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:51.648 16:16:22 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:51.648 { 00:25:51.648 "uuid": "45f57e5a-3135-425f-a5d9-6f4f479a0d54", 00:25:51.648 "name": "lvs_0", 00:25:51.648 "base_bdev": "Nvme0n1", 00:25:51.648 "total_data_clusters": 476466, 00:25:51.648 "free_clusters": 476466, 00:25:51.648 "block_size": 512, 00:25:51.648 "cluster_size": 4194304 00:25:51.648 } 00:25:51.648 ]' 00:25:51.648 16:16:22 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="45f57e5a-3135-425f-a5d9-6f4f479a0d54") .free_clusters' 00:25:51.648 16:16:22 -- common/autotest_common.sh@1358 -- # fc=476466 00:25:51.648 16:16:22 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="45f57e5a-3135-425f-a5d9-6f4f479a0d54") .cluster_size' 00:25:51.648 
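The get_lvs_free_mb helper invoked above extracts free_clusters and cluster_size from bdev_lvol_get_lvstores with jq and converts them to MiB; with the values reported for lvs_0 that works out to 476466 clusters x 4 MiB = 1905864 MiB, which is the free_mb the trace prints next before clamping it to 20480 MiB for lbd_0. The same arithmetic as a standalone sketch, using the values from this run:

    free_clusters=476466
    cluster_size=4194304   # bytes per cluster (4 MiB)
    free_mb=$(( free_clusters * cluster_size / 1024 / 1024 ))
    echo "$free_mb"        # 1905864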
16:16:22 -- common/autotest_common.sh@1359 -- # cs=4194304 00:25:51.648 16:16:22 -- common/autotest_common.sh@1362 -- # free_mb=1905864 00:25:51.648 16:16:22 -- common/autotest_common.sh@1363 -- # echo 1905864 00:25:51.648 1905864 00:25:51.648 16:16:22 -- host/perf.sh@77 -- # '[' 1905864 -gt 20480 ']' 00:25:51.648 16:16:22 -- host/perf.sh@78 -- # free_mb=20480 00:25:51.648 16:16:22 -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 45f57e5a-3135-425f-a5d9-6f4f479a0d54 lbd_0 20480 00:25:51.907 16:16:22 -- host/perf.sh@80 -- # lb_guid=261dcf68-16f5-4029-87b3-c22dc4d3909c 00:25:51.907 16:16:22 -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 261dcf68-16f5-4029-87b3-c22dc4d3909c lvs_n_0 00:25:53.288 16:16:24 -- host/perf.sh@83 -- # ls_nested_guid=b226cc87-d921-4f16-8a64-a1c907a79c85 00:25:53.288 16:16:24 -- host/perf.sh@84 -- # get_lvs_free_mb b226cc87-d921-4f16-8a64-a1c907a79c85 00:25:53.288 16:16:24 -- common/autotest_common.sh@1353 -- # local lvs_uuid=b226cc87-d921-4f16-8a64-a1c907a79c85 00:25:53.288 16:16:24 -- common/autotest_common.sh@1354 -- # local lvs_info 00:25:53.288 16:16:24 -- common/autotest_common.sh@1355 -- # local fc 00:25:53.288 16:16:24 -- common/autotest_common.sh@1356 -- # local cs 00:25:53.288 16:16:24 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:53.547 16:16:24 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:25:53.547 { 00:25:53.547 "uuid": "45f57e5a-3135-425f-a5d9-6f4f479a0d54", 00:25:53.547 "name": "lvs_0", 00:25:53.547 "base_bdev": "Nvme0n1", 00:25:53.547 "total_data_clusters": 476466, 00:25:53.547 "free_clusters": 471346, 00:25:53.547 "block_size": 512, 00:25:53.547 "cluster_size": 4194304 00:25:53.547 }, 00:25:53.547 { 00:25:53.547 "uuid": "b226cc87-d921-4f16-8a64-a1c907a79c85", 00:25:53.547 "name": "lvs_n_0", 00:25:53.547 "base_bdev": "261dcf68-16f5-4029-87b3-c22dc4d3909c", 00:25:53.547 "total_data_clusters": 5114, 00:25:53.547 "free_clusters": 5114, 00:25:53.547 "block_size": 512, 00:25:53.547 "cluster_size": 4194304 00:25:53.547 } 00:25:53.547 ]' 00:25:53.547 16:16:24 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="b226cc87-d921-4f16-8a64-a1c907a79c85") .free_clusters' 00:25:53.547 16:16:24 -- common/autotest_common.sh@1358 -- # fc=5114 00:25:53.547 16:16:24 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="b226cc87-d921-4f16-8a64-a1c907a79c85") .cluster_size' 00:25:53.547 16:16:24 -- common/autotest_common.sh@1359 -- # cs=4194304 00:25:53.547 16:16:24 -- common/autotest_common.sh@1362 -- # free_mb=20456 00:25:53.547 16:16:24 -- common/autotest_common.sh@1363 -- # echo 20456 00:25:53.548 20456 00:25:53.548 16:16:24 -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:25:53.548 16:16:24 -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b226cc87-d921-4f16-8a64-a1c907a79c85 lbd_nest_0 20456 00:25:53.807 16:16:24 -- host/perf.sh@88 -- # lb_nested_guid=8d0692ae-4dde-415e-8520-3f2d235d2a3b 00:25:53.807 16:16:24 -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:54.066 16:16:24 -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:25:54.066 16:16:24 -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 8d0692ae-4dde-415e-8520-3f2d235d2a3b 00:25:54.066 16:16:24 -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:25:54.326 16:16:25 -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:25:54.326 16:16:25 -- host/perf.sh@96 -- # io_size=("512" "131072") 00:25:54.326 16:16:25 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:25:54.326 16:16:25 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:25:54.327 16:16:25 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:25:54.327 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.540 Initializing NVMe Controllers 00:26:06.540 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:06.540 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:06.540 Initialization complete. Launching workers. 00:26:06.540 ======================================================== 00:26:06.540 Latency(us) 00:26:06.540 Device Information : IOPS MiB/s Average min max 00:26:06.540 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 5998.55 2.93 166.37 67.06 6062.87 00:26:06.540 ======================================================== 00:26:06.540 Total : 5998.55 2.93 166.37 67.06 6062.87 00:26:06.540 00:26:06.540 16:16:36 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:06.540 16:16:36 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:06.540 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.754 Initializing NVMe Controllers 00:26:18.754 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:18.754 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:18.754 Initialization complete. Launching workers. 00:26:18.754 ======================================================== 00:26:18.754 Latency(us) 00:26:18.754 Device Information : IOPS MiB/s Average min max 00:26:18.754 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2677.50 334.69 372.98 155.25 8151.24 00:26:18.754 ======================================================== 00:26:18.754 Total : 2677.50 334.69 372.98 155.25 8151.24 00:26:18.754 00:26:18.754 16:16:47 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:18.754 16:16:47 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:18.754 16:16:47 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:18.754 EAL: No free 2048 kB hugepages reported on node 1 00:26:28.741 Initializing NVMe Controllers 00:26:28.741 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:28.741 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:28.741 Initialization complete. Launching workers. 
00:26:28.741 ======================================================== 00:26:28.741 Latency(us) 00:26:28.741 Device Information : IOPS MiB/s Average min max 00:26:28.741 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 12269.05 5.99 2608.24 870.62 8750.12 00:26:28.741 ======================================================== 00:26:28.741 Total : 12269.05 5.99 2608.24 870.62 8750.12 00:26:28.741 00:26:28.741 16:16:59 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:28.741 16:16:59 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:28.741 EAL: No free 2048 kB hugepages reported on node 1 00:26:40.964 Initializing NVMe Controllers 00:26:40.964 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:40.964 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:40.964 Initialization complete. Launching workers. 00:26:40.964 ======================================================== 00:26:40.964 Latency(us) 00:26:40.964 Device Information : IOPS MiB/s Average min max 00:26:40.964 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3982.98 497.87 8033.81 3930.42 16031.84 00:26:40.964 ======================================================== 00:26:40.964 Total : 3982.98 497.87 8033.81 3930.42 16031.84 00:26:40.964 00:26:40.964 16:17:10 -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:26:40.964 16:17:10 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:40.964 16:17:10 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:40.964 EAL: No free 2048 kB hugepages reported on node 1 00:26:51.121 Initializing NVMe Controllers 00:26:51.121 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:26:51.121 Controller IO queue size 128, less than required. 00:26:51.121 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:51.121 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:51.121 Initialization complete. Launching workers. 00:26:51.121 ======================================================== 00:26:51.121 Latency(us) 00:26:51.121 Device Information : IOPS MiB/s Average min max 00:26:51.121 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19925.20 9.73 6426.25 1928.83 14535.94 00:26:51.121 ======================================================== 00:26:51.121 Total : 19925.20 9.73 6426.25 1928.83 14535.94 00:26:51.121 00:26:51.121 16:17:21 -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:26:51.121 16:17:21 -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:26:51.121 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.340 Initializing NVMe Controllers 00:27:03.340 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:27:03.340 Controller IO queue size 128, less than required. 00:27:03.340 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:27:03.340 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:27:03.340 Initialization complete. Launching workers. 00:27:03.340 ======================================================== 00:27:03.340 Latency(us) 00:27:03.340 Device Information : IOPS MiB/s Average min max 00:27:03.340 RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11472.71 1434.09 11162.84 3009.44 23650.82 00:27:03.340 ======================================================== 00:27:03.340 Total : 11472.71 1434.09 11162.84 3009.44 23650.82 00:27:03.340 00:27:03.340 16:17:33 -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:03.340 16:17:33 -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8d0692ae-4dde-415e-8520-3f2d235d2a3b 00:27:03.340 16:17:34 -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:03.599 16:17:34 -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 261dcf68-16f5-4029-87b3-c22dc4d3909c 00:27:03.858 16:17:34 -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:03.858 16:17:34 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:27:03.858 16:17:34 -- host/perf.sh@114 -- # nvmftestfini 00:27:03.858 16:17:34 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:03.858 16:17:34 -- nvmf/common.sh@116 -- # sync 00:27:03.858 16:17:34 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:03.858 16:17:34 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:03.858 16:17:34 -- nvmf/common.sh@119 -- # set +e 00:27:03.858 16:17:34 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:03.858 16:17:34 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:03.858 rmmod nvme_rdma 00:27:04.118 rmmod nvme_fabrics 00:27:04.118 16:17:34 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:04.118 16:17:34 -- nvmf/common.sh@123 -- # set -e 00:27:04.118 16:17:34 -- nvmf/common.sh@124 -- # return 0 00:27:04.118 16:17:34 -- nvmf/common.sh@477 -- # '[' -n 1457570 ']' 00:27:04.118 16:17:34 -- nvmf/common.sh@478 -- # killprocess 1457570 00:27:04.118 16:17:34 -- common/autotest_common.sh@936 -- # '[' -z 1457570 ']' 00:27:04.118 16:17:34 -- common/autotest_common.sh@940 -- # kill -0 1457570 00:27:04.118 16:17:34 -- common/autotest_common.sh@941 -- # uname 00:27:04.118 16:17:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:04.118 16:17:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1457570 00:27:04.118 16:17:34 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:04.118 16:17:34 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:04.118 16:17:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1457570' 00:27:04.118 killing process with pid 1457570 00:27:04.118 16:17:34 -- common/autotest_common.sh@955 -- # kill 1457570 00:27:04.118 16:17:34 -- common/autotest_common.sh@960 -- # wait 1457570 00:27:06.656 16:17:37 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:06.656 16:17:37 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:06.656 00:27:06.656 real 1m51.441s 00:27:06.656 user 7m2.045s 00:27:06.656 sys 0m6.928s 00:27:06.656 16:17:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:06.656 16:17:37 -- 
common/autotest_common.sh@10 -- # set +x 00:27:06.656 ************************************ 00:27:06.656 END TEST nvmf_perf 00:27:06.656 ************************************ 00:27:06.656 16:17:37 -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:27:06.656 16:17:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:06.656 16:17:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:06.656 16:17:37 -- common/autotest_common.sh@10 -- # set +x 00:27:06.656 ************************************ 00:27:06.656 START TEST nvmf_fio_host 00:27:06.656 ************************************ 00:27:06.656 16:17:37 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=rdma 00:27:06.656 * Looking for test storage... 00:27:06.656 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:06.656 16:17:37 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:06.656 16:17:37 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:06.656 16:17:37 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:06.915 16:17:37 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:06.915 16:17:37 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:06.915 16:17:37 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:06.915 16:17:37 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:06.915 16:17:37 -- scripts/common.sh@335 -- # IFS=.-: 00:27:06.915 16:17:37 -- scripts/common.sh@335 -- # read -ra ver1 00:27:06.915 16:17:37 -- scripts/common.sh@336 -- # IFS=.-: 00:27:06.915 16:17:37 -- scripts/common.sh@336 -- # read -ra ver2 00:27:06.915 16:17:37 -- scripts/common.sh@337 -- # local 'op=<' 00:27:06.916 16:17:37 -- scripts/common.sh@339 -- # ver1_l=2 00:27:06.916 16:17:37 -- scripts/common.sh@340 -- # ver2_l=1 00:27:06.916 16:17:37 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:06.916 16:17:37 -- scripts/common.sh@343 -- # case "$op" in 00:27:06.916 16:17:37 -- scripts/common.sh@344 -- # : 1 00:27:06.916 16:17:37 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:06.916 16:17:37 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:06.916 16:17:37 -- scripts/common.sh@364 -- # decimal 1 00:27:06.916 16:17:37 -- scripts/common.sh@352 -- # local d=1 00:27:06.916 16:17:37 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:06.916 16:17:37 -- scripts/common.sh@354 -- # echo 1 00:27:06.916 16:17:37 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:06.916 16:17:37 -- scripts/common.sh@365 -- # decimal 2 00:27:06.916 16:17:37 -- scripts/common.sh@352 -- # local d=2 00:27:06.916 16:17:37 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:06.916 16:17:37 -- scripts/common.sh@354 -- # echo 2 00:27:06.916 16:17:37 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:06.916 16:17:37 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:06.916 16:17:37 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:06.916 16:17:37 -- scripts/common.sh@367 -- # return 0 00:27:06.916 16:17:37 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:06.916 16:17:37 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:06.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.916 --rc genhtml_branch_coverage=1 00:27:06.916 --rc genhtml_function_coverage=1 00:27:06.916 --rc genhtml_legend=1 00:27:06.916 --rc geninfo_all_blocks=1 00:27:06.916 --rc geninfo_unexecuted_blocks=1 00:27:06.916 00:27:06.916 ' 00:27:06.916 16:17:37 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:06.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.916 --rc genhtml_branch_coverage=1 00:27:06.916 --rc genhtml_function_coverage=1 00:27:06.916 --rc genhtml_legend=1 00:27:06.916 --rc geninfo_all_blocks=1 00:27:06.916 --rc geninfo_unexecuted_blocks=1 00:27:06.916 00:27:06.916 ' 00:27:06.916 16:17:37 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:06.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.916 --rc genhtml_branch_coverage=1 00:27:06.916 --rc genhtml_function_coverage=1 00:27:06.916 --rc genhtml_legend=1 00:27:06.916 --rc geninfo_all_blocks=1 00:27:06.916 --rc geninfo_unexecuted_blocks=1 00:27:06.916 00:27:06.916 ' 00:27:06.916 16:17:37 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:06.916 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:06.916 --rc genhtml_branch_coverage=1 00:27:06.916 --rc genhtml_function_coverage=1 00:27:06.916 --rc genhtml_legend=1 00:27:06.916 --rc geninfo_all_blocks=1 00:27:06.916 --rc geninfo_unexecuted_blocks=1 00:27:06.916 00:27:06.916 ' 00:27:06.916 16:17:37 -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:06.916 16:17:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.916 16:17:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.916 16:17:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.916 16:17:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.916 16:17:37 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.916 16:17:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.916 16:17:37 -- paths/export.sh@5 -- # export PATH 00:27:06.916 16:17:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.916 16:17:37 -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:06.916 16:17:37 -- nvmf/common.sh@7 -- # uname -s 00:27:06.916 16:17:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:06.916 16:17:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:06.916 16:17:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:06.916 16:17:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:06.916 16:17:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:06.916 16:17:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:06.916 16:17:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:06.916 16:17:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:06.916 16:17:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:06.916 16:17:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:06.916 16:17:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:06.916 16:17:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:06.916 16:17:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:06.916 16:17:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:06.916 16:17:37 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:06.916 16:17:37 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:06.916 16:17:37 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:06.916 16:17:37 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:06.916 16:17:37 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:06.916 16:17:37 -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.916 16:17:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.916 16:17:37 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.916 16:17:37 -- paths/export.sh@5 -- # export PATH 00:27:06.916 16:17:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:06.916 16:17:37 -- nvmf/common.sh@46 -- # : 0 00:27:06.916 16:17:37 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:06.916 16:17:37 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:06.916 16:17:37 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:06.916 16:17:37 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:06.916 16:17:37 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:06.916 16:17:37 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:06.916 16:17:37 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:06.916 16:17:37 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:06.916 16:17:37 -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:06.916 16:17:37 -- host/fio.sh@14 -- # nvmftestinit 00:27:06.916 16:17:37 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:06.916 16:17:37 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:06.917 16:17:37 -- 
nvmf/common.sh@436 -- # prepare_net_devs 00:27:06.917 16:17:37 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:06.917 16:17:37 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:06.917 16:17:37 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:06.917 16:17:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:06.917 16:17:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:06.917 16:17:37 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:06.917 16:17:37 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:06.917 16:17:37 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:06.917 16:17:37 -- common/autotest_common.sh@10 -- # set +x 00:27:13.488 16:17:43 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:27:13.488 16:17:43 -- nvmf/common.sh@290 -- # pci_devs=() 00:27:13.488 16:17:43 -- nvmf/common.sh@290 -- # local -a pci_devs 00:27:13.488 16:17:43 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:27:13.488 16:17:43 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:27:13.488 16:17:43 -- nvmf/common.sh@292 -- # pci_drivers=() 00:27:13.488 16:17:43 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:27:13.488 16:17:43 -- nvmf/common.sh@294 -- # net_devs=() 00:27:13.488 16:17:43 -- nvmf/common.sh@294 -- # local -ga net_devs 00:27:13.488 16:17:43 -- nvmf/common.sh@295 -- # e810=() 00:27:13.488 16:17:43 -- nvmf/common.sh@295 -- # local -ga e810 00:27:13.488 16:17:43 -- nvmf/common.sh@296 -- # x722=() 00:27:13.488 16:17:43 -- nvmf/common.sh@296 -- # local -ga x722 00:27:13.488 16:17:43 -- nvmf/common.sh@297 -- # mlx=() 00:27:13.488 16:17:43 -- nvmf/common.sh@297 -- # local -ga mlx 00:27:13.488 16:17:43 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:13.488 16:17:43 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:13.488 16:17:43 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:13.488 16:17:43 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:13.488 16:17:43 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:13.488 16:17:43 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:13.488 16:17:43 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:13.488 16:17:43 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:13.488 16:17:43 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:13.488 16:17:43 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:13.488 16:17:43 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:13.488 16:17:43 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:27:13.488 16:17:43 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:27:13.488 16:17:43 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:27:13.488 16:17:43 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:27:13.488 16:17:43 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:27:13.488 16:17:43 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:27:13.488 16:17:43 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:27:13.488 16:17:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:13.488 16:17:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:27:13.488 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:27:13.488 16:17:43 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:13.488 16:17:43 -- nvmf/common.sh@345 -- # [[ mlx5_core == 
unbound ]] 00:27:13.488 16:17:43 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:13.488 16:17:43 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:13.488 16:17:43 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:13.488 16:17:43 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:13.489 16:17:43 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:27:13.489 16:17:43 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:27:13.489 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:27:13.489 16:17:43 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:27:13.489 16:17:43 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:27:13.489 16:17:43 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:13.489 16:17:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.489 16:17:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:13.489 16:17:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.489 16:17:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:27:13.489 Found net devices under 0000:d9:00.0: mlx_0_0 00:27:13.489 16:17:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.489 16:17:43 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:27:13.489 16:17:43 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:13.489 16:17:43 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:27:13.489 16:17:43 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:13.489 16:17:43 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:27:13.489 Found net devices under 0000:d9:00.1: mlx_0_1 00:27:13.489 16:17:43 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:27:13.489 16:17:43 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:27:13.489 16:17:43 -- nvmf/common.sh@402 -- # is_hw=yes 00:27:13.489 16:17:43 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@408 -- # rdma_device_init 00:27:13.489 16:17:43 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:27:13.489 16:17:43 -- nvmf/common.sh@57 -- # uname 00:27:13.489 16:17:43 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:27:13.489 16:17:43 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:27:13.489 16:17:43 -- nvmf/common.sh@62 -- # modprobe ib_core 00:27:13.489 16:17:43 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:27:13.489 16:17:43 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:27:13.489 16:17:43 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:27:13.489 16:17:43 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:27:13.489 16:17:43 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:27:13.489 16:17:43 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:27:13.489 16:17:43 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:27:13.489 16:17:43 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:27:13.489 16:17:43 -- 
nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:13.489 16:17:43 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:13.489 16:17:43 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:13.489 16:17:43 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:13.489 16:17:43 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:13.489 16:17:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:13.489 16:17:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:13.489 16:17:43 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:13.489 16:17:43 -- nvmf/common.sh@104 -- # continue 2 00:27:13.489 16:17:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:13.489 16:17:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:13.489 16:17:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:13.489 16:17:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:13.489 16:17:43 -- nvmf/common.sh@104 -- # continue 2 00:27:13.489 16:17:43 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:13.489 16:17:43 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:27:13.489 16:17:43 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:13.489 16:17:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:13.489 16:17:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:13.489 16:17:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:13.489 16:17:43 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:27:13.489 16:17:43 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:27:13.489 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:13.489 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:27:13.489 altname enp217s0f0np0 00:27:13.489 altname ens818f0np0 00:27:13.489 inet 192.168.100.8/24 scope global mlx_0_0 00:27:13.489 valid_lft forever preferred_lft forever 00:27:13.489 16:17:43 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:27:13.489 16:17:43 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:27:13.489 16:17:43 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:13.489 16:17:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:13.489 16:17:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:13.489 16:17:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:13.489 16:17:43 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:27:13.489 16:17:43 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:27:13.489 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:27:13.489 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:27:13.489 altname enp217s0f1np1 00:27:13.489 altname ens818f1np1 00:27:13.489 inet 192.168.100.9/24 scope global mlx_0_1 00:27:13.489 valid_lft forever preferred_lft forever 00:27:13.489 16:17:43 -- nvmf/common.sh@410 -- # return 0 00:27:13.489 16:17:43 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:27:13.489 16:17:43 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:27:13.489 16:17:43 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@444 -- # get_available_rdma_ips 
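The block above is nvmf/common.sh enumerating the RDMA-capable netdevs (get_rdma_if_list matches mlx_0_0 and mlx_0_1) and reading each port's IPv4 address with get_ip_address, which is just ip/awk/cut. A condensed sketch of that idiom, assuming the same mlx_0_* interfaces already carry the 192.168.100.x addresses shown in the trace; the loop and echo are only illustrative:

  # read the first IPv4 address of an interface, as nvmf/common.sh does
  get_ip_address() {
      local interface=$1
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }

  for nic in mlx_0_0 mlx_0_1; do          # the two ports found earlier
      echo "$nic -> $(get_ip_address "$nic")"
  done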
00:27:13.489 16:17:43 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:27:13.489 16:17:43 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:27:13.489 16:17:43 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:27:13.489 16:17:43 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:27:13.489 16:17:43 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:27:13.489 16:17:43 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:27:13.489 16:17:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:13.489 16:17:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:13.489 16:17:43 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:27:13.489 16:17:43 -- nvmf/common.sh@104 -- # continue 2 00:27:13.489 16:17:43 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:27:13.489 16:17:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:13.489 16:17:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:27:13.489 16:17:43 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:27:13.489 16:17:43 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:27:13.489 16:17:43 -- nvmf/common.sh@104 -- # continue 2 00:27:13.489 16:17:43 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:13.489 16:17:43 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:27:13.489 16:17:43 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:27:13.489 16:17:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:27:13.489 16:17:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:13.489 16:17:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:13.489 16:17:43 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:27:13.489 16:17:43 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:27:13.489 16:17:43 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:27:13.489 16:17:43 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:27:13.489 16:17:43 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:27:13.489 16:17:43 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:27:13.489 16:17:43 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:27:13.489 192.168.100.9' 00:27:13.489 16:17:43 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:27:13.489 192.168.100.9' 00:27:13.489 16:17:43 -- nvmf/common.sh@445 -- # head -n 1 00:27:13.489 16:17:43 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:27:13.489 16:17:43 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:27:13.489 192.168.100.9' 00:27:13.489 16:17:43 -- nvmf/common.sh@446 -- # tail -n +2 00:27:13.489 16:17:43 -- nvmf/common.sh@446 -- # head -n 1 00:27:13.489 16:17:43 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:27:13.489 16:17:43 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:27:13.489 16:17:43 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:27:13.489 16:17:43 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:27:13.489 16:17:43 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:27:13.489 16:17:43 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:27:13.489 16:17:43 -- host/fio.sh@16 -- # [[ y != y ]] 00:27:13.489 16:17:43 -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:27:13.489 16:17:43 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:13.489 16:17:43 -- common/autotest_common.sh@10 -- # set +x 
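Here the harness repeats the interface walk to build RDMA_IP_LIST, peels off the first and second target IPs with head/tail, fixes NVMF_TRANSPORT_OPTS for RDMA, and loads the nvme-rdma initiator module. A minimal sketch of that selection step, with the list written out literally instead of being regenerated (the two addresses are the ones reported above; modprobe needs root):

  RDMA_IP_LIST=$'192.168.100.8\n192.168.100.9'       # normally rebuilt from the netdevs
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1)
  NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024'
  sudo modprobe nvme-rdma                             # initiator-side driver, as in the trace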
00:27:13.489 16:17:43 -- host/fio.sh@24 -- # nvmfpid=1478447 00:27:13.489 16:17:43 -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:13.489 16:17:43 -- host/fio.sh@28 -- # waitforlisten 1478447 00:27:13.489 16:17:43 -- common/autotest_common.sh@829 -- # '[' -z 1478447 ']' 00:27:13.489 16:17:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:13.489 16:17:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:13.489 16:17:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:13.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:13.490 16:17:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:13.490 16:17:43 -- host/fio.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:27:13.490 16:17:43 -- common/autotest_common.sh@10 -- # set +x 00:27:13.490 [2024-11-20 16:17:43.730357] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:27:13.490 [2024-11-20 16:17:43.730414] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:13.490 EAL: No free 2048 kB hugepages reported on node 1 00:27:13.490 [2024-11-20 16:17:43.802492] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:13.490 [2024-11-20 16:17:43.841158] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:27:13.490 [2024-11-20 16:17:43.841267] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:13.490 [2024-11-20 16:17:43.841277] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:13.490 [2024-11-20 16:17:43.841286] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:13.490 [2024-11-20 16:17:43.841333] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:13.490 [2024-11-20 16:17:43.841428] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:27:13.490 [2024-11-20 16:17:43.841528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:27:13.490 [2024-11-20 16:17:43.841536] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:13.749 16:17:44 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:13.749 16:17:44 -- common/autotest_common.sh@862 -- # return 0 00:27:13.749 16:17:44 -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:27:14.008 [2024-11-20 16:17:44.727670] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1faf0d0/0x1fb35a0) succeed. 00:27:14.008 [2024-11-20 16:17:44.736848] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1fb0670/0x1ff4c40) succeed. 
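That brings the target up: nvmf_tgt is launched with shared-memory id 0, tracepoint mask 0xFFFF and core mask 0xF, the harness waits for the RPC socket (waitforlisten), and nvmf_create_transport registers the RDMA transport, which is when the two mlx5 IB devices are claimed. Reduced to its essentials and with paths shortened, the bring-up looks roughly like this; the rpc_get_methods polling loop only stands in for the waitforlisten helper:

  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # wait until the RPC server answers on /var/tmp/spdk.sock
  until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192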
00:27:14.267 16:17:44 -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:27:14.267 16:17:44 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:14.267 16:17:44 -- common/autotest_common.sh@10 -- # set +x 00:27:14.267 16:17:44 -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:27:14.526 Malloc1 00:27:14.526 16:17:45 -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:14.784 16:17:45 -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:14.784 16:17:45 -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:27:15.043 [2024-11-20 16:17:45.688245] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:27:15.043 16:17:45 -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:27:15.303 16:17:45 -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:27:15.303 16:17:45 -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:15.303 16:17:45 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:15.303 16:17:45 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:15.303 16:17:45 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:15.303 16:17:45 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:15.303 16:17:45 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:15.303 16:17:45 -- common/autotest_common.sh@1330 -- # shift 00:27:15.303 16:17:45 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:15.303 16:17:45 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.303 16:17:45 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:15.303 16:17:45 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:15.303 16:17:45 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:15.303 16:17:45 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:15.303 16:17:45 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:15.303 16:17:45 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:15.303 16:17:45 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:15.303 16:17:45 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:15.303 16:17:45 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:15.304 16:17:45 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:15.304 16:17:45 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:15.304 16:17:45 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:15.304 16:17:45 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:15.563 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:15.563 fio-3.35 00:27:15.563 Starting 1 thread 00:27:15.563 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.117 00:27:18.117 test: (groupid=0, jobs=1): err= 0: pid=1479014: Wed Nov 20 16:17:48 2024 00:27:18.117 read: IOPS=19.3k, BW=75.4MiB/s (79.0MB/s)(151MiB/2004msec) 00:27:18.117 slat (nsec): min=1329, max=22890, avg=1461.71, stdev=404.66 00:27:18.117 clat (usec): min=1555, max=6001, avg=3297.76, stdev=72.60 00:27:18.117 lat (usec): min=1571, max=6002, avg=3299.22, stdev=72.54 00:27:18.117 clat percentiles (usec): 00:27:18.117 | 1.00th=[ 3261], 5.00th=[ 3261], 10.00th=[ 3294], 20.00th=[ 3294], 00:27:18.117 | 30.00th=[ 3294], 40.00th=[ 3294], 50.00th=[ 3294], 60.00th=[ 3294], 00:27:18.117 | 70.00th=[ 3294], 80.00th=[ 3294], 90.00th=[ 3326], 95.00th=[ 3326], 00:27:18.117 | 99.00th=[ 3326], 99.50th=[ 3326], 99.90th=[ 4359], 99.95th=[ 5145], 00:27:18.117 | 99.99th=[ 5604] 00:27:18.117 bw ( KiB/s): min=75640, max=77904, per=100.00%, avg=77202.00, stdev=1052.63, samples=4 00:27:18.117 iops : min=18910, max=19476, avg=19300.50, stdev=263.16, samples=4 00:27:18.117 write: IOPS=19.3k, BW=75.2MiB/s (78.9MB/s)(151MiB/2004msec); 0 zone resets 00:27:18.117 slat (nsec): min=1363, max=19277, avg=1558.91, stdev=414.24 00:27:18.117 clat (usec): min=2287, max=6317, avg=3295.75, stdev=71.00 00:27:18.117 lat (usec): min=2296, max=6319, avg=3297.31, stdev=70.93 00:27:18.117 clat percentiles (usec): 00:27:18.117 | 1.00th=[ 3261], 5.00th=[ 3261], 10.00th=[ 3294], 20.00th=[ 3294], 00:27:18.117 | 30.00th=[ 3294], 40.00th=[ 3294], 50.00th=[ 3294], 60.00th=[ 3294], 00:27:18.117 | 70.00th=[ 3294], 80.00th=[ 3294], 90.00th=[ 3326], 95.00th=[ 3326], 00:27:18.117 | 99.00th=[ 3326], 99.50th=[ 3326], 99.90th=[ 3884], 99.95th=[ 5145], 00:27:18.117 | 99.99th=[ 5997] 00:27:18.117 bw ( KiB/s): min=75528, max=77824, per=100.00%, avg=77090.00, stdev=1076.75, samples=4 00:27:18.117 iops : min=18882, max=19456, avg=19272.50, stdev=269.19, samples=4 00:27:18.117 lat (msec) : 2=0.01%, 4=99.90%, 10=0.10% 00:27:18.117 cpu : usr=99.40%, sys=0.20%, ctx=17, majf=0, minf=2 00:27:18.117 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:18.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:18.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:18.117 issued rwts: total=38672,38601,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:18.117 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:18.117 00:27:18.117 Run status group 0 (all jobs): 00:27:18.117 READ: bw=75.4MiB/s (79.0MB/s), 75.4MiB/s-75.4MiB/s (79.0MB/s-79.0MB/s), io=151MiB (158MB), run=2004-2004msec 00:27:18.117 WRITE: bw=75.2MiB/s (78.9MB/s), 75.2MiB/s-75.2MiB/s (78.9MB/s-78.9MB/s), io=151MiB (158MB), run=2004-2004msec 00:27:18.117 16:17:48 -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:18.117 16:17:48 -- common/autotest_common.sh@1349 -- # fio_plugin 
/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:18.117 16:17:48 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:18.117 16:17:48 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:18.117 16:17:48 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:18.117 16:17:48 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:18.117 16:17:48 -- common/autotest_common.sh@1330 -- # shift 00:27:18.117 16:17:48 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:18.117 16:17:48 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:18.117 16:17:48 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:18.117 16:17:48 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:18.117 16:17:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:18.117 16:17:48 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:18.117 16:17:48 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:18.117 16:17:48 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:18.117 16:17:48 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:18.117 16:17:48 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:18.117 16:17:48 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:18.117 16:17:48 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:18.117 16:17:48 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:18.117 16:17:48 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:18.117 16:17:48 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' 00:27:18.376 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:27:18.376 fio-3.35 00:27:18.376 Starting 1 thread 00:27:18.376 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.910 00:27:20.910 test: (groupid=0, jobs=1): err= 0: pid=1479673: Wed Nov 20 16:17:51 2024 00:27:20.910 read: IOPS=15.2k, BW=238MiB/s (250MB/s)(468MiB/1964msec) 00:27:20.910 slat (nsec): min=2220, max=40081, avg=2576.77, stdev=984.57 00:27:20.910 clat (usec): min=448, max=7624, avg=1552.71, stdev=1218.53 00:27:20.910 lat (usec): min=450, max=7643, avg=1555.29, stdev=1218.85 00:27:20.910 clat percentiles (usec): 00:27:20.910 | 1.00th=[ 652], 5.00th=[ 742], 10.00th=[ 799], 20.00th=[ 873], 00:27:20.910 | 30.00th=[ 947], 40.00th=[ 1029], 50.00th=[ 1139], 60.00th=[ 1254], 00:27:20.910 | 70.00th=[ 1369], 80.00th=[ 1565], 90.00th=[ 3720], 95.00th=[ 4621], 00:27:20.910 | 99.00th=[ 5997], 99.50th=[ 6521], 99.90th=[ 6980], 99.95th=[ 7046], 00:27:20.910 | 99.99th=[ 7570] 00:27:20.910 bw ( KiB/s): min=107936, max=123392, per=48.43%, avg=118160.00, stdev=7264.14, samples=4 00:27:20.910 iops : min= 6746, max= 7712, avg=7385.00, stdev=454.01, samples=4 00:27:20.910 write: IOPS=8618, BW=135MiB/s (141MB/s)(240MiB/1779msec); 0 zone resets 00:27:20.910 slat (usec): min=26, max=118, avg=28.85, 
stdev= 5.04 00:27:20.910 clat (usec): min=3936, max=18310, avg=11843.22, stdev=1655.72 00:27:20.910 lat (usec): min=3965, max=18339, avg=11872.07, stdev=1655.15 00:27:20.910 clat percentiles (usec): 00:27:20.910 | 1.00th=[ 6587], 5.00th=[ 9241], 10.00th=[10028], 20.00th=[10683], 00:27:20.910 | 30.00th=[11076], 40.00th=[11469], 50.00th=[11863], 60.00th=[12256], 00:27:20.910 | 70.00th=[12649], 80.00th=[13042], 90.00th=[13829], 95.00th=[14484], 00:27:20.910 | 99.00th=[15795], 99.50th=[16450], 99.90th=[17433], 99.95th=[17695], 00:27:20.910 | 99.99th=[18220] 00:27:20.910 bw ( KiB/s): min=111872, max=127808, per=88.32%, avg=121800.00, stdev=7617.72, samples=4 00:27:20.910 iops : min= 6992, max= 7988, avg=7612.50, stdev=476.11, samples=4 00:27:20.910 lat (usec) : 500=0.02%, 750=3.85%, 1000=20.37% 00:27:20.910 lat (msec) : 2=33.28%, 4=2.21%, 10=9.79%, 20=30.48% 00:27:20.910 cpu : usr=95.91%, sys=2.15%, ctx=227, majf=0, minf=1 00:27:20.910 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:27:20.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:20.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:20.910 issued rwts: total=29946,15333,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:20.910 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:20.910 00:27:20.910 Run status group 0 (all jobs): 00:27:20.910 READ: bw=238MiB/s (250MB/s), 238MiB/s-238MiB/s (250MB/s-250MB/s), io=468MiB (491MB), run=1964-1964msec 00:27:20.910 WRITE: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=240MiB (251MB), run=1779-1779msec 00:27:20.910 16:17:51 -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:20.910 16:17:51 -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:27:20.910 16:17:51 -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:27:20.910 16:17:51 -- host/fio.sh@51 -- # get_nvme_bdfs 00:27:20.910 16:17:51 -- common/autotest_common.sh@1508 -- # bdfs=() 00:27:20.910 16:17:51 -- common/autotest_common.sh@1508 -- # local bdfs 00:27:20.910 16:17:51 -- common/autotest_common.sh@1509 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:20.910 16:17:51 -- common/autotest_common.sh@1509 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:20.910 16:17:51 -- common/autotest_common.sh@1509 -- # jq -r '.config[].params.traddr' 00:27:20.910 16:17:51 -- common/autotest_common.sh@1510 -- # (( 1 == 0 )) 00:27:20.910 16:17:51 -- common/autotest_common.sh@1514 -- # printf '%s\n' 0000:d8:00.0 00:27:20.910 16:17:51 -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:d8:00.0 -i 192.168.100.8 00:27:24.201 Nvme0n1 00:27:24.202 16:17:54 -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:27:29.478 16:18:00 -- host/fio.sh@53 -- # ls_guid=c93329c1-c5b7-4153-b6b1-be163ecf6687 00:27:29.478 16:18:00 -- host/fio.sh@54 -- # get_lvs_free_mb c93329c1-c5b7-4153-b6b1-be163ecf6687 00:27:29.478 16:18:00 -- common/autotest_common.sh@1353 -- # local lvs_uuid=c93329c1-c5b7-4153-b6b1-be163ecf6687 00:27:29.478 16:18:00 -- common/autotest_common.sh@1354 -- # local lvs_info 00:27:29.478 16:18:00 -- common/autotest_common.sh@1355 -- # local fc 00:27:29.478 16:18:00 -- common/autotest_common.sh@1356 -- # local cs 00:27:29.478 16:18:00 -- 
common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:29.738 16:18:00 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:27:29.738 { 00:27:29.738 "uuid": "c93329c1-c5b7-4153-b6b1-be163ecf6687", 00:27:29.738 "name": "lvs_0", 00:27:29.738 "base_bdev": "Nvme0n1", 00:27:29.738 "total_data_clusters": 1862, 00:27:29.738 "free_clusters": 1862, 00:27:29.738 "block_size": 512, 00:27:29.738 "cluster_size": 1073741824 00:27:29.738 } 00:27:29.738 ]' 00:27:29.738 16:18:00 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="c93329c1-c5b7-4153-b6b1-be163ecf6687") .free_clusters' 00:27:29.738 16:18:00 -- common/autotest_common.sh@1358 -- # fc=1862 00:27:29.738 16:18:00 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="c93329c1-c5b7-4153-b6b1-be163ecf6687") .cluster_size' 00:27:29.738 16:18:00 -- common/autotest_common.sh@1359 -- # cs=1073741824 00:27:29.738 16:18:00 -- common/autotest_common.sh@1362 -- # free_mb=1906688 00:27:29.738 16:18:00 -- common/autotest_common.sh@1363 -- # echo 1906688 00:27:29.738 1906688 00:27:29.738 16:18:00 -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 1906688 00:27:30.307 ade9ee82-8dc5-43ca-90cf-b16a215db5f2 00:27:30.307 16:18:01 -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:27:30.567 16:18:01 -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:27:30.826 16:18:01 -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420 00:27:30.827 16:18:01 -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:30.827 16:18:01 -- common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:30.827 16:18:01 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:30.827 16:18:01 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:30.827 16:18:01 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:30.827 16:18:01 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:30.827 16:18:01 -- common/autotest_common.sh@1330 -- # shift 00:27:30.827 16:18:01 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:30.827 16:18:01 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:30.827 16:18:01 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:30.827 16:18:01 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:30.827 16:18:01 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:30.827 16:18:01 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:30.827 16:18:01 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:30.827 16:18:01 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 
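The lvol sizing above comes straight from the bdev_lvol_get_lvstores JSON: free_clusters (1862) times cluster_size (1 GiB), converted to MiB, gives the 1906688 handed to bdev_lvol_create, and the new lvol is then published through a second subsystem. A compact sketch of that flow, with rpc.py paths shortened and the lvstore UUID taken from the trace:

  lvs_uuid=c93329c1-c5b7-4153-b6b1-be163ecf6687
  lvs_json=$(./scripts/rpc.py bdev_lvol_get_lvstores)
  fc=$(echo "$lvs_json" | jq -r ".[] | select(.uuid==\"$lvs_uuid\") .free_clusters")
  cs=$(echo "$lvs_json" | jq -r ".[] | select(.uuid==\"$lvs_uuid\") .cluster_size")
  free_mb=$(( fc * cs / 1024 / 1024 ))               # 1862 * 1 GiB clusters -> 1906688 MiB
  ./scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 "$free_mb"
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t rdma -a 192.168.100.8 -s 4420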
00:27:30.827 16:18:01 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:30.827 16:18:01 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:30.827 16:18:01 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:31.118 16:18:01 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:31.118 16:18:01 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:31.118 16:18:01 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:31.118 16:18:01 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:31.376 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:31.376 fio-3.35 00:27:31.376 Starting 1 thread 00:27:31.376 EAL: No free 2048 kB hugepages reported on node 1 00:27:34.033 00:27:34.033 test: (groupid=0, jobs=1): err= 0: pid=1482116: Wed Nov 20 16:18:04 2024 00:27:34.033 read: IOPS=10.6k, BW=41.4MiB/s (43.4MB/s)(83.0MiB/2005msec) 00:27:34.033 slat (nsec): min=1335, max=17939, avg=1515.12, stdev=486.89 00:27:34.033 clat (usec): min=170, max=332575, avg=5987.81, stdev=18064.16 00:27:34.033 lat (usec): min=171, max=332579, avg=5989.33, stdev=18064.19 00:27:34.033 clat percentiles (msec): 00:27:34.033 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 5], 00:27:34.033 | 30.00th=[ 5], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:27:34.033 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:27:34.033 | 99.00th=[ 6], 99.50th=[ 6], 99.90th=[ 334], 99.95th=[ 334], 00:27:34.033 | 99.99th=[ 334] 00:27:34.033 bw ( KiB/s): min=15600, max=51520, per=99.93%, avg=42340.00, stdev=17829.05, samples=4 00:27:34.033 iops : min= 3900, max=12880, avg=10585.00, stdev=4457.26, samples=4 00:27:34.033 write: IOPS=10.6k, BW=41.4MiB/s (43.4MB/s)(82.9MiB/2005msec); 0 zone resets 00:27:34.033 slat (nsec): min=1380, max=17502, avg=1609.67, stdev=491.13 00:27:34.033 clat (usec): min=140, max=332924, avg=5965.43, stdev=17572.66 00:27:34.033 lat (usec): min=142, max=332927, avg=5967.04, stdev=17572.71 00:27:34.033 clat percentiles (msec): 00:27:34.033 | 1.00th=[ 5], 5.00th=[ 5], 10.00th=[ 5], 20.00th=[ 6], 00:27:34.033 | 30.00th=[ 6], 40.00th=[ 6], 50.00th=[ 6], 60.00th=[ 6], 00:27:34.033 | 70.00th=[ 6], 80.00th=[ 6], 90.00th=[ 6], 95.00th=[ 6], 00:27:34.033 | 99.00th=[ 6], 99.50th=[ 6], 99.90th=[ 334], 99.95th=[ 334], 00:27:34.033 | 99.99th=[ 334] 00:27:34.033 bw ( KiB/s): min=16392, max=51248, per=99.97%, avg=42330.00, stdev=17293.10, samples=4 00:27:34.033 iops : min= 4098, max=12812, avg=10582.50, stdev=4323.28, samples=4 00:27:34.033 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.02% 00:27:34.033 lat (msec) : 2=0.04%, 4=0.30%, 10=99.31%, 500=0.30% 00:27:34.033 cpu : usr=99.40%, sys=0.20%, ctx=21, majf=0, minf=2 00:27:34.033 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:34.033 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:34.033 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:34.033 issued rwts: total=21237,21225,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:34.033 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:34.033 00:27:34.033 Run status group 0 (all jobs): 00:27:34.033 READ: bw=41.4MiB/s (43.4MB/s), 
41.4MiB/s-41.4MiB/s (43.4MB/s-43.4MB/s), io=83.0MiB (87.0MB), run=2005-2005msec 00:27:34.033 WRITE: bw=41.4MiB/s (43.4MB/s), 41.4MiB/s-41.4MiB/s (43.4MB/s-43.4MB/s), io=82.9MiB (86.9MB), run=2005-2005msec 00:27:34.033 16:18:04 -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:34.033 16:18:04 -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:27:34.970 16:18:05 -- host/fio.sh@64 -- # ls_nested_guid=7b3b38f1-1557-4a3b-ac1d-539c1ea52dba 00:27:34.970 16:18:05 -- host/fio.sh@65 -- # get_lvs_free_mb 7b3b38f1-1557-4a3b-ac1d-539c1ea52dba 00:27:34.970 16:18:05 -- common/autotest_common.sh@1353 -- # local lvs_uuid=7b3b38f1-1557-4a3b-ac1d-539c1ea52dba 00:27:34.970 16:18:05 -- common/autotest_common.sh@1354 -- # local lvs_info 00:27:34.970 16:18:05 -- common/autotest_common.sh@1355 -- # local fc 00:27:34.970 16:18:05 -- common/autotest_common.sh@1356 -- # local cs 00:27:34.970 16:18:05 -- common/autotest_common.sh@1357 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:27:35.230 16:18:05 -- common/autotest_common.sh@1357 -- # lvs_info='[ 00:27:35.230 { 00:27:35.230 "uuid": "c93329c1-c5b7-4153-b6b1-be163ecf6687", 00:27:35.230 "name": "lvs_0", 00:27:35.230 "base_bdev": "Nvme0n1", 00:27:35.230 "total_data_clusters": 1862, 00:27:35.230 "free_clusters": 0, 00:27:35.230 "block_size": 512, 00:27:35.230 "cluster_size": 1073741824 00:27:35.230 }, 00:27:35.230 { 00:27:35.230 "uuid": "7b3b38f1-1557-4a3b-ac1d-539c1ea52dba", 00:27:35.230 "name": "lvs_n_0", 00:27:35.230 "base_bdev": "ade9ee82-8dc5-43ca-90cf-b16a215db5f2", 00:27:35.230 "total_data_clusters": 476206, 00:27:35.230 "free_clusters": 476206, 00:27:35.230 "block_size": 512, 00:27:35.230 "cluster_size": 4194304 00:27:35.230 } 00:27:35.230 ]' 00:27:35.230 16:18:05 -- common/autotest_common.sh@1358 -- # jq '.[] | select(.uuid=="7b3b38f1-1557-4a3b-ac1d-539c1ea52dba") .free_clusters' 00:27:35.230 16:18:05 -- common/autotest_common.sh@1358 -- # fc=476206 00:27:35.230 16:18:05 -- common/autotest_common.sh@1359 -- # jq '.[] | select(.uuid=="7b3b38f1-1557-4a3b-ac1d-539c1ea52dba") .cluster_size' 00:27:35.230 16:18:06 -- common/autotest_common.sh@1359 -- # cs=4194304 00:27:35.230 16:18:06 -- common/autotest_common.sh@1362 -- # free_mb=1904824 00:27:35.230 16:18:06 -- common/autotest_common.sh@1363 -- # echo 1904824 00:27:35.230 1904824 00:27:35.230 16:18:06 -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 1904824 00:27:36.166 ad923bff-56de-43ef-a53b-d9eeaa0c7705 00:27:36.166 16:18:06 -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:27:36.426 16:18:07 -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:27:36.685 16:18:07 -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t rdma -a 192.168.100.8 -s 4420 00:27:36.685 16:18:07 -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:36.685 16:18:07 -- 
common/autotest_common.sh@1349 -- # fio_plugin /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:36.685 16:18:07 -- common/autotest_common.sh@1326 -- # local fio_dir=/usr/src/fio 00:27:36.685 16:18:07 -- common/autotest_common.sh@1328 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:27:36.685 16:18:07 -- common/autotest_common.sh@1328 -- # local sanitizers 00:27:36.685 16:18:07 -- common/autotest_common.sh@1329 -- # local plugin=/var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:36.685 16:18:07 -- common/autotest_common.sh@1330 -- # shift 00:27:36.685 16:18:07 -- common/autotest_common.sh@1332 -- # local asan_lib= 00:27:36.685 16:18:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:36.685 16:18:07 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:36.685 16:18:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:36.685 16:18:07 -- common/autotest_common.sh@1334 -- # grep libasan 00:27:36.966 16:18:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:36.966 16:18:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:36.966 16:18:07 -- common/autotest_common.sh@1333 -- # for sanitizer in "${sanitizers[@]}" 00:27:36.966 16:18:07 -- common/autotest_common.sh@1334 -- # ldd /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme 00:27:36.966 16:18:07 -- common/autotest_common.sh@1334 -- # grep libclang_rt.asan 00:27:36.967 16:18:07 -- common/autotest_common.sh@1334 -- # awk '{print $3}' 00:27:36.967 16:18:07 -- common/autotest_common.sh@1334 -- # asan_lib= 00:27:36.967 16:18:07 -- common/autotest_common.sh@1335 -- # [[ -n '' ]] 00:27:36.967 16:18:07 -- common/autotest_common.sh@1341 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/fio/spdk_nvme' 00:27:36.967 16:18:07 -- common/autotest_common.sh@1341 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=rdma adrfam=IPv4 traddr=192.168.100.8 trsvcid=4420 ns=1' --bs=4096 00:27:37.230 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:27:37.230 fio-3.35 00:27:37.230 Starting 1 thread 00:27:37.230 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.756 00:27:39.756 test: (groupid=0, jobs=1): err= 0: pid=1483489: Wed Nov 20 16:18:10 2024 00:27:39.756 read: IOPS=10.7k, BW=41.9MiB/s (43.9MB/s)(84.0MiB/2005msec) 00:27:39.756 slat (nsec): min=1356, max=17244, avg=1489.90, stdev=221.28 00:27:39.756 clat (usec): min=3022, max=10155, avg=5903.81, stdev=188.51 00:27:39.756 lat (usec): min=3025, max=10156, avg=5905.30, stdev=188.48 00:27:39.756 clat percentiles (usec): 00:27:39.756 | 1.00th=[ 5276], 5.00th=[ 5866], 10.00th=[ 5866], 20.00th=[ 5866], 00:27:39.756 | 30.00th=[ 5866], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 5932], 00:27:39.756 | 70.00th=[ 5932], 80.00th=[ 5932], 90.00th=[ 5932], 95.00th=[ 5932], 00:27:39.756 | 99.00th=[ 6521], 99.50th=[ 6587], 99.90th=[ 8586], 99.95th=[ 9372], 00:27:39.756 | 99.99th=[10159] 00:27:39.756 bw ( KiB/s): min=41408, max=43536, per=99.95%, avg=42880.00, stdev=1002.76, samples=4 00:27:39.756 iops : min=10352, max=10884, avg=10720.00, stdev=250.69, samples=4 00:27:39.756 write: IOPS=10.7k, BW=41.8MiB/s (43.9MB/s)(83.9MiB/2005msec); 0 zone 
resets 00:27:39.756 slat (nsec): min=1395, max=17492, avg=1638.77, stdev=304.27 00:27:39.756 clat (usec): min=3025, max=10838, avg=5923.43, stdev=190.75 00:27:39.756 lat (usec): min=3029, max=10839, avg=5925.07, stdev=190.72 00:27:39.756 clat percentiles (usec): 00:27:39.756 | 1.00th=[ 5276], 5.00th=[ 5866], 10.00th=[ 5866], 20.00th=[ 5866], 00:27:39.756 | 30.00th=[ 5932], 40.00th=[ 5932], 50.00th=[ 5932], 60.00th=[ 5932], 00:27:39.756 | 70.00th=[ 5932], 80.00th=[ 5932], 90.00th=[ 5997], 95.00th=[ 5997], 00:27:39.756 | 99.00th=[ 6521], 99.50th=[ 6587], 99.90th=[ 8586], 99.95th=[10028], 00:27:39.756 | 99.99th=[10159] 00:27:39.756 bw ( KiB/s): min=41792, max=43304, per=99.96%, avg=42814.00, stdev=693.20, samples=4 00:27:39.756 iops : min=10448, max=10826, avg=10703.50, stdev=173.30, samples=4 00:27:39.756 lat (msec) : 4=0.04%, 10=99.92%, 20=0.04% 00:27:39.756 cpu : usr=99.55%, sys=0.10%, ctx=15, majf=0, minf=2 00:27:39.756 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:27:39.756 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:39.756 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:27:39.756 issued rwts: total=21504,21470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:39.756 latency : target=0, window=0, percentile=100.00%, depth=128 00:27:39.756 00:27:39.756 Run status group 0 (all jobs): 00:27:39.756 READ: bw=41.9MiB/s (43.9MB/s), 41.9MiB/s-41.9MiB/s (43.9MB/s-43.9MB/s), io=84.0MiB (88.1MB), run=2005-2005msec 00:27:39.756 WRITE: bw=41.8MiB/s (43.9MB/s), 41.8MiB/s-41.8MiB/s (43.9MB/s-43.9MB/s), io=83.9MiB (87.9MB), run=2005-2005msec 00:27:39.756 16:18:10 -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:27:39.756 16:18:10 -- host/fio.sh@74 -- # sync 00:27:39.756 16:18:10 -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:27:47.863 16:18:17 -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:27:47.863 16:18:17 -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:27:53.124 16:18:23 -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:27:53.124 16:18:23 -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:27:56.401 16:18:26 -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:56.401 16:18:26 -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:27:56.401 16:18:26 -- host/fio.sh@86 -- # nvmftestfini 00:27:56.401 16:18:26 -- nvmf/common.sh@476 -- # nvmfcleanup 00:27:56.401 16:18:26 -- nvmf/common.sh@116 -- # sync 00:27:56.401 16:18:26 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:27:56.401 16:18:26 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:27:56.401 16:18:26 -- nvmf/common.sh@119 -- # set +e 00:27:56.401 16:18:26 -- nvmf/common.sh@120 -- # for i in {1..20} 00:27:56.401 16:18:26 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:27:56.401 rmmod nvme_rdma 00:27:56.401 rmmod nvme_fabrics 00:27:56.401 16:18:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:27:56.401 16:18:26 -- nvmf/common.sh@123 -- # set -e 00:27:56.401 16:18:26 -- nvmf/common.sh@124 -- # return 0 00:27:56.401 16:18:26 -- nvmf/common.sh@477 -- # '[' -n 1478447 ']' 00:27:56.401 16:18:26 -- 
nvmf/common.sh@478 -- # killprocess 1478447 00:27:56.401 16:18:26 -- common/autotest_common.sh@936 -- # '[' -z 1478447 ']' 00:27:56.401 16:18:26 -- common/autotest_common.sh@940 -- # kill -0 1478447 00:27:56.401 16:18:26 -- common/autotest_common.sh@941 -- # uname 00:27:56.401 16:18:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:56.401 16:18:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1478447 00:27:56.401 16:18:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:56.401 16:18:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:56.401 16:18:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1478447' 00:27:56.401 killing process with pid 1478447 00:27:56.401 16:18:26 -- common/autotest_common.sh@955 -- # kill 1478447 00:27:56.401 16:18:26 -- common/autotest_common.sh@960 -- # wait 1478447 00:27:56.401 16:18:27 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:27:56.401 16:18:27 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:27:56.401 00:27:56.401 real 0m49.669s 00:27:56.401 user 3m39.507s 00:27:56.401 sys 0m7.317s 00:27:56.401 16:18:27 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:27:56.401 16:18:27 -- common/autotest_common.sh@10 -- # set +x 00:27:56.401 ************************************ 00:27:56.401 END TEST nvmf_fio_host 00:27:56.401 ************************************ 00:27:56.401 16:18:27 -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:56.401 16:18:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:56.401 16:18:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:56.401 16:18:27 -- common/autotest_common.sh@10 -- # set +x 00:27:56.401 ************************************ 00:27:56.401 START TEST nvmf_failover 00:27:56.401 ************************************ 00:27:56.401 16:18:27 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=rdma 00:27:56.401 * Looking for test storage... 00:27:56.401 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:27:56.401 16:18:27 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:27:56.401 16:18:27 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:27:56.401 16:18:27 -- common/autotest_common.sh@1690 -- # lcov --version 00:27:56.660 16:18:27 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:27:56.660 16:18:27 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:27:56.660 16:18:27 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:27:56.660 16:18:27 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:27:56.660 16:18:27 -- scripts/common.sh@335 -- # IFS=.-: 00:27:56.660 16:18:27 -- scripts/common.sh@335 -- # read -ra ver1 00:27:56.660 16:18:27 -- scripts/common.sh@336 -- # IFS=.-: 00:27:56.660 16:18:27 -- scripts/common.sh@336 -- # read -ra ver2 00:27:56.660 16:18:27 -- scripts/common.sh@337 -- # local 'op=<' 00:27:56.660 16:18:27 -- scripts/common.sh@339 -- # ver1_l=2 00:27:56.660 16:18:27 -- scripts/common.sh@340 -- # ver2_l=1 00:27:56.660 16:18:27 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:27:56.660 16:18:27 -- scripts/common.sh@343 -- # case "$op" in 00:27:56.660 16:18:27 -- scripts/common.sh@344 -- # : 1 00:27:56.660 16:18:27 -- scripts/common.sh@363 -- # (( v = 0 )) 00:27:56.660 16:18:27 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:56.660 16:18:27 -- scripts/common.sh@364 -- # decimal 1 00:27:56.660 16:18:27 -- scripts/common.sh@352 -- # local d=1 00:27:56.660 16:18:27 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:56.660 16:18:27 -- scripts/common.sh@354 -- # echo 1 00:27:56.660 16:18:27 -- scripts/common.sh@364 -- # ver1[v]=1 00:27:56.660 16:18:27 -- scripts/common.sh@365 -- # decimal 2 00:27:56.660 16:18:27 -- scripts/common.sh@352 -- # local d=2 00:27:56.660 16:18:27 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:56.660 16:18:27 -- scripts/common.sh@354 -- # echo 2 00:27:56.660 16:18:27 -- scripts/common.sh@365 -- # ver2[v]=2 00:27:56.660 16:18:27 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:27:56.660 16:18:27 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:27:56.660 16:18:27 -- scripts/common.sh@367 -- # return 0 00:27:56.660 16:18:27 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:56.660 16:18:27 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:27:56.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.660 --rc genhtml_branch_coverage=1 00:27:56.660 --rc genhtml_function_coverage=1 00:27:56.660 --rc genhtml_legend=1 00:27:56.660 --rc geninfo_all_blocks=1 00:27:56.660 --rc geninfo_unexecuted_blocks=1 00:27:56.660 00:27:56.660 ' 00:27:56.660 16:18:27 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:27:56.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.660 --rc genhtml_branch_coverage=1 00:27:56.660 --rc genhtml_function_coverage=1 00:27:56.660 --rc genhtml_legend=1 00:27:56.660 --rc geninfo_all_blocks=1 00:27:56.660 --rc geninfo_unexecuted_blocks=1 00:27:56.660 00:27:56.660 ' 00:27:56.660 16:18:27 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:27:56.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.660 --rc genhtml_branch_coverage=1 00:27:56.660 --rc genhtml_function_coverage=1 00:27:56.660 --rc genhtml_legend=1 00:27:56.660 --rc geninfo_all_blocks=1 00:27:56.660 --rc geninfo_unexecuted_blocks=1 00:27:56.660 00:27:56.660 ' 00:27:56.660 16:18:27 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:27:56.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:56.660 --rc genhtml_branch_coverage=1 00:27:56.660 --rc genhtml_function_coverage=1 00:27:56.660 --rc genhtml_legend=1 00:27:56.660 --rc geninfo_all_blocks=1 00:27:56.660 --rc geninfo_unexecuted_blocks=1 00:27:56.660 00:27:56.660 ' 00:27:56.660 16:18:27 -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:27:56.660 16:18:27 -- nvmf/common.sh@7 -- # uname -s 00:27:56.660 16:18:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:56.660 16:18:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:56.660 16:18:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:56.660 16:18:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:56.660 16:18:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:56.660 16:18:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:56.660 16:18:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:56.660 16:18:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:56.660 16:18:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:56.660 16:18:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:56.660 16:18:27 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:27:56.660 16:18:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:27:56.660 16:18:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:56.660 16:18:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:56.660 16:18:27 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:56.660 16:18:27 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:27:56.660 16:18:27 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:56.660 16:18:27 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:56.660 16:18:27 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:56.660 16:18:27 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.660 16:18:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.660 16:18:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.660 16:18:27 -- paths/export.sh@5 -- # export PATH 00:27:56.660 16:18:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:56.660 16:18:27 -- nvmf/common.sh@46 -- # : 0 00:27:56.660 16:18:27 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:27:56.661 16:18:27 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:27:56.661 16:18:27 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:27:56.661 16:18:27 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:56.661 16:18:27 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:56.661 16:18:27 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:27:56.661 16:18:27 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:27:56.661 16:18:27 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:27:56.661 16:18:27 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:56.661 16:18:27 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:56.661 16:18:27 -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py 00:27:56.661 16:18:27 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:56.661 16:18:27 -- host/failover.sh@18 -- # nvmftestinit 00:27:56.661 16:18:27 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:27:56.661 16:18:27 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:56.661 16:18:27 -- nvmf/common.sh@436 -- # prepare_net_devs 00:27:56.661 16:18:27 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:27:56.661 16:18:27 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:27:56.661 16:18:27 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.661 16:18:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:56.661 16:18:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:56.661 16:18:27 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:27:56.661 16:18:27 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:27:56.661 16:18:27 -- nvmf/common.sh@284 -- # xtrace_disable 00:27:56.661 16:18:27 -- common/autotest_common.sh@10 -- # set +x 00:28:03.230 16:18:33 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:03.230 16:18:33 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:03.230 16:18:33 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:03.230 16:18:33 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:03.230 16:18:33 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:03.230 16:18:33 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:03.230 16:18:33 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:03.230 16:18:33 -- nvmf/common.sh@294 -- # net_devs=() 00:28:03.230 16:18:33 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:03.230 16:18:33 -- nvmf/common.sh@295 -- # e810=() 00:28:03.230 16:18:33 -- nvmf/common.sh@295 -- # local -ga e810 00:28:03.230 16:18:33 -- nvmf/common.sh@296 -- # x722=() 00:28:03.230 16:18:33 -- nvmf/common.sh@296 -- # local -ga x722 00:28:03.230 16:18:33 -- nvmf/common.sh@297 -- # mlx=() 00:28:03.230 16:18:33 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:03.230 16:18:33 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.230 16:18:33 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.230 16:18:33 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.230 16:18:33 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.230 16:18:33 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.230 16:18:33 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.230 16:18:33 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.230 16:18:33 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.230 16:18:33 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.230 16:18:33 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.230 16:18:33 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.230 16:18:33 
-- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:03.230 16:18:33 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:03.230 16:18:33 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:03.230 16:18:33 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:03.230 16:18:33 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:03.230 16:18:33 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:03.231 16:18:33 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:03.231 16:18:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:03.231 16:18:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:03.231 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:03.231 16:18:33 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:03.231 16:18:33 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:03.231 16:18:33 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:03.231 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:03.231 16:18:33 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:03.231 16:18:33 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:03.231 16:18:33 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:03.231 16:18:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.231 16:18:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:03.231 16:18:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.231 16:18:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:03.231 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:03.231 16:18:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.231 16:18:33 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:03.231 16:18:33 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.231 16:18:33 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:03.231 16:18:33 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.231 16:18:33 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:03.231 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:03.231 16:18:33 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.231 16:18:33 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:03.231 16:18:33 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:03.231 16:18:33 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:03.231 16:18:33 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 
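The RDMA environment prep that the trace performs next (load_ib_rdma_modules, then allocate_nic_ips reading the Mellanox port addresses) can be replayed by hand. A minimal sketch, assuming the same mlx_0_0/mlx_0_1 interface names seen in this run and that the 192.168.100.x addresses are already configured on them; this is not the harness code itself.

#!/usr/bin/env bash
# Sketch only: load the IB/RDMA kernel modules the trace loads below, then read
# the IPv4 address of each test interface the same way get_ip_address does.
set -e
for mod in ib_cm ib_core ib_umad ib_uverbs iw_cm rdma_cm rdma_ucm; do
  modprobe "$mod"
done
for ifc in mlx_0_0 mlx_0_1; do   # interface names taken from this run
  ip -o -4 addr show "$ifc" | awk '{print $4}' | cut -d/ -f1
done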
00:28:03.231 16:18:33 -- nvmf/common.sh@57 -- # uname 00:28:03.231 16:18:33 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:03.231 16:18:33 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:03.231 16:18:33 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:03.231 16:18:33 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:03.231 16:18:33 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:03.231 16:18:33 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:03.231 16:18:33 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:03.231 16:18:33 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:03.231 16:18:33 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:03.231 16:18:33 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:03.231 16:18:33 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:03.231 16:18:33 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:03.231 16:18:33 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:03.231 16:18:33 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:03.231 16:18:33 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:03.231 16:18:33 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:03.231 16:18:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:03.231 16:18:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.231 16:18:33 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:03.231 16:18:33 -- nvmf/common.sh@104 -- # continue 2 00:28:03.231 16:18:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:03.231 16:18:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.231 16:18:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.231 16:18:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:03.231 16:18:33 -- nvmf/common.sh@104 -- # continue 2 00:28:03.231 16:18:33 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:03.231 16:18:33 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:03.231 16:18:33 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:03.231 16:18:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:03.231 16:18:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:03.231 16:18:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:03.231 16:18:33 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:03.231 16:18:33 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:03.231 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:03.231 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:03.231 altname enp217s0f0np0 00:28:03.231 altname ens818f0np0 00:28:03.231 inet 192.168.100.8/24 scope global mlx_0_0 00:28:03.231 valid_lft forever preferred_lft forever 00:28:03.231 16:18:33 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:03.231 16:18:33 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:03.231 16:18:33 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:03.231 16:18:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:03.231 16:18:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:03.231 16:18:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:03.231 16:18:33 -- nvmf/common.sh@73 -- # 
ip=192.168.100.9 00:28:03.231 16:18:33 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:03.231 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:03.231 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:03.231 altname enp217s0f1np1 00:28:03.231 altname ens818f1np1 00:28:03.231 inet 192.168.100.9/24 scope global mlx_0_1 00:28:03.231 valid_lft forever preferred_lft forever 00:28:03.231 16:18:33 -- nvmf/common.sh@410 -- # return 0 00:28:03.231 16:18:33 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:03.231 16:18:33 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:03.231 16:18:33 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:03.231 16:18:33 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:03.231 16:18:33 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:03.231 16:18:33 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:03.231 16:18:33 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:03.231 16:18:33 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:03.231 16:18:33 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:03.231 16:18:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:03.231 16:18:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.231 16:18:33 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:03.231 16:18:33 -- nvmf/common.sh@104 -- # continue 2 00:28:03.231 16:18:33 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:03.231 16:18:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.231 16:18:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:03.231 16:18:33 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:03.231 16:18:33 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:03.231 16:18:33 -- nvmf/common.sh@104 -- # continue 2 00:28:03.231 16:18:33 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:03.231 16:18:33 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:03.231 16:18:33 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:03.231 16:18:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:03.231 16:18:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:03.231 16:18:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:03.231 16:18:33 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:03.231 16:18:33 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:03.231 16:18:33 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:03.231 16:18:33 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:03.231 16:18:33 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:03.231 16:18:33 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:03.231 16:18:33 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:03.231 192.168.100.9' 00:28:03.231 16:18:33 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:03.231 192.168.100.9' 00:28:03.231 16:18:33 -- nvmf/common.sh@445 -- # head -n 1 00:28:03.231 16:18:33 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:03.231 16:18:33 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:03.231 192.168.100.9' 00:28:03.231 16:18:33 -- nvmf/common.sh@446 -- 
# tail -n +2 00:28:03.231 16:18:33 -- nvmf/common.sh@446 -- # head -n 1 00:28:03.231 16:18:33 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:03.231 16:18:33 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:28:03.231 16:18:33 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:03.231 16:18:33 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:03.231 16:18:33 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:03.231 16:18:33 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:03.231 16:18:33 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:28:03.231 16:18:33 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:03.231 16:18:33 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:03.231 16:18:33 -- common/autotest_common.sh@10 -- # set +x 00:28:03.231 16:18:33 -- nvmf/common.sh@469 -- # nvmfpid=1490083 00:28:03.231 16:18:33 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:03.231 16:18:33 -- nvmf/common.sh@470 -- # waitforlisten 1490083 00:28:03.232 16:18:33 -- common/autotest_common.sh@829 -- # '[' -z 1490083 ']' 00:28:03.232 16:18:33 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.232 16:18:33 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:03.232 16:18:33 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.232 16:18:33 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:03.232 16:18:33 -- common/autotest_common.sh@10 -- # set +x 00:28:03.232 [2024-11-20 16:18:33.548753] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:03.232 [2024-11-20 16:18:33.548804] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.232 EAL: No free 2048 kB hugepages reported on node 1 00:28:03.232 [2024-11-20 16:18:33.620688] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:03.232 [2024-11-20 16:18:33.657910] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:03.232 [2024-11-20 16:18:33.658021] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.232 [2024-11-20 16:18:33.658030] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.232 [2024-11-20 16:18:33.658040] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
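Before bdevperf starts, failover.sh builds the target through rpc.py: one RDMA transport, a 64 MB Malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and listeners on ports 4420, 4421 and 4422. A condensed sketch of the traced calls that follow, with paths and arguments exactly as they appear in this run:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do   # the extra listeners are the paths the failover test rotates onto
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s "$port"
done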
00:28:03.232 [2024-11-20 16:18:33.658141] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.232 [2024-11-20 16:18:33.658228] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:03.232 [2024-11-20 16:18:33.658230] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.799 16:18:34 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:03.799 16:18:34 -- common/autotest_common.sh@862 -- # return 0 00:28:03.799 16:18:34 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:03.799 16:18:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:03.799 16:18:34 -- common/autotest_common.sh@10 -- # set +x 00:28:03.799 16:18:34 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.799 16:18:34 -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:04.056 [2024-11-20 16:18:34.607318] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1732900/0x1736db0) succeed. 00:28:04.056 [2024-11-20 16:18:34.616225] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1733e00/0x1778450) succeed. 00:28:04.056 16:18:34 -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:28:04.313 Malloc0 00:28:04.313 16:18:34 -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:04.569 16:18:35 -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:04.569 16:18:35 -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:04.825 [2024-11-20 16:18:35.477468] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:04.825 16:18:35 -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:28:05.150 [2024-11-20 16:18:35.653813] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:28:05.150 16:18:35 -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:28:05.150 [2024-11-20 16:18:35.834495] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:28:05.150 16:18:35 -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:28:05.150 16:18:35 -- host/failover.sh@31 -- # bdevperf_pid=1490461 00:28:05.150 16:18:35 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:05.150 16:18:35 -- host/failover.sh@34 -- # waitforlisten 1490461 /var/tmp/bdevperf.sock 00:28:05.150 16:18:35 -- common/autotest_common.sh@829 -- # '[' -z 1490461 ']' 00:28:05.150 16:18:35 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:05.150 
16:18:35 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:05.150 16:18:35 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:05.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:05.150 16:18:35 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:05.150 16:18:35 -- common/autotest_common.sh@10 -- # set +x 00:28:06.080 16:18:36 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:06.080 16:18:36 -- common/autotest_common.sh@862 -- # return 0 00:28:06.080 16:18:36 -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:06.336 NVMe0n1 00:28:06.336 16:18:37 -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:06.593 00:28:06.593 16:18:37 -- host/failover.sh@39 -- # run_test_pid=1490731 00:28:06.593 16:18:37 -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:06.593 16:18:37 -- host/failover.sh@41 -- # sleep 1 00:28:07.526 16:18:38 -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:07.783 16:18:38 -- host/failover.sh@45 -- # sleep 3 00:28:11.060 16:18:41 -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:11.060 00:28:11.060 16:18:41 -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:28:11.318 16:18:41 -- host/failover.sh@50 -- # sleep 3 00:28:14.601 16:18:44 -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:14.601 [2024-11-20 16:18:45.075865] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:14.601 16:18:45 -- host/failover.sh@55 -- # sleep 1 00:28:15.536 16:18:46 -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:28:15.536 16:18:46 -- host/failover.sh@59 -- # wait 1490731 00:28:22.104 0 00:28:22.104 16:18:52 -- host/failover.sh@61 -- # killprocess 1490461 00:28:22.104 16:18:52 -- common/autotest_common.sh@936 -- # '[' -z 1490461 ']' 00:28:22.104 16:18:52 -- common/autotest_common.sh@940 -- # kill -0 1490461 00:28:22.104 16:18:52 -- common/autotest_common.sh@941 -- # uname 00:28:22.104 16:18:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:22.104 16:18:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1490461 00:28:22.104 16:18:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:22.104 16:18:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:22.104 16:18:52 
-- common/autotest_common.sh@954 -- # echo 'killing process with pid 1490461' 00:28:22.104 killing process with pid 1490461 00:28:22.104 16:18:52 -- common/autotest_common.sh@955 -- # kill 1490461 00:28:22.104 16:18:52 -- common/autotest_common.sh@960 -- # wait 1490461 00:28:22.104 16:18:52 -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:22.104 [2024-11-20 16:18:35.904142] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:22.104 [2024-11-20 16:18:35.904201] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1490461 ] 00:28:22.104 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.104 [2024-11-20 16:18:35.976388] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.104 [2024-11-20 16:18:36.013466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.104 Running I/O for 15 seconds... 00:28:22.104 [2024-11-20 16:18:39.437794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.104 [2024-11-20 16:18:39.437839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54102 cdw0:0 sqhd:c046 p:0 m:0 dnr:0 00:28:22.104 [2024-11-20 16:18:39.437851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.104 [2024-11-20 16:18:39.437861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54102 cdw0:0 sqhd:c046 p:0 m:0 dnr:0 00:28:22.104 [2024-11-20 16:18:39.437871] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.104 [2024-11-20 16:18:39.437881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54102 cdw0:0 sqhd:c046 p:0 m:0 dnr:0 00:28:22.104 [2024-11-20 16:18:39.437890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:22.104 [2024-11-20 16:18:39.437900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54102 cdw0:0 sqhd:c046 p:0 m:0 dnr:0 00:28:22.104 [2024-11-20 16:18:39.440445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:22.104 [2024-11-20 16:18:39.440461] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
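The CQ transport error and "in failed state" messages above, together with the "Start failover from 192.168.100.8:4420 to 192.168.100.8:4421" notice and the abort dump that follow, are the expected effect of the failover trigger: removing the active 4420 listener (host/failover.sh@43 earlier in the trace). A sketch of that trigger, reusing the same rpc.py call seen in this run:

rpc=/var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py
# Drop the path bdevperf is currently using; its in-flight commands are reported
# as "ABORTED - SQ DELETION" in the dump below and the NVMe bdev fails over to
# the 192.168.100.8:4421 listener.
$rpc nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
# The test keeps rotating paths the same way: removing 4421 (failover.sh@48),
# re-adding 4420 (failover.sh@53) and removing 4422 (failover.sh@57), all
# visible earlier in this section.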
00:28:22.104 [2024-11-20 16:18:39.440478] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:28:22.104 [2024-11-20 16:18:39.440487] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:22.104 [2024-11-20 16:18:39.440504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:92288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x181400 00:28:22.104 [2024-11-20 16:18:39.440514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.104 [2024-11-20 16:18:39.440553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:92296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x181400 00:28:22.104 [2024-11-20 16:18:39.440565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.104 [2024-11-20 16:18:39.440582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:92992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.104 [2024-11-20 16:18:39.440592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.104 [2024-11-20 16:18:39.440625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:93000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f6800 len:0x1000 key:0x183f00 00:28:22.104 [2024-11-20 16:18:39.440636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.104 [2024-11-20 16:18:39.440652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:93008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.104 [2024-11-20 16:18:39.440662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.104 [2024-11-20 16:18:39.440698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:93016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.104 [2024-11-20 16:18:39.440709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.104 [2024-11-20 16:18:39.440726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:93024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f3680 len:0x1000 key:0x183f00 00:28:22.104 [2024-11-20 16:18:39.440735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.104 [2024-11-20 16:18:39.440766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:92320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x181400 00:28:22.104 [2024-11-20 16:18:39.440777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.104 [2024-11-20 16:18:39.440807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:93032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.104 [2024-11-20 16:18:39.440817] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.104 [2024-11-20 16:18:39.440833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:92328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x181400 00:28:22.104 [2024-11-20 16:18:39.440844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.104 [2024-11-20 16:18:39.440860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:92336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x181400 00:28:22.104 [2024-11-20 16:18:39.440870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.104 [2024-11-20 16:18:39.440900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:93040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.104 [2024-11-20 16:18:39.440911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.104 [2024-11-20 16:18:39.440927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:93048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.105 [2024-11-20 16:18:39.440936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.440966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:93056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.105 [2024-11-20 16:18:39.440977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.440993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:92360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x181400 00:28:22.105 [2024-11-20 16:18:39.441003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:93064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ea200 len:0x1000 key:0x183f00 00:28:22.105 [2024-11-20 16:18:39.441029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:93072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.105 [2024-11-20 16:18:39.441071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:92376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x181400 00:28:22.105 [2024-11-20 16:18:39.441113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 
[2024-11-20 16:18:39.441129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:93080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e7080 len:0x1000 key:0x183f00 00:28:22.105 [2024-11-20 16:18:39.441139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:92392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x181400 00:28:22.105 [2024-11-20 16:18:39.441179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:92400 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x181400 00:28:22.105 [2024-11-20 16:18:39.441220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:93088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e3f00 len:0x1000 key:0x183f00 00:28:22.105 [2024-11-20 16:18:39.441246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:92416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x181400 00:28:22.105 [2024-11-20 16:18:39.441287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:92424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x181400 00:28:22.105 [2024-11-20 16:18:39.441328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:92432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x181400 00:28:22.105 [2024-11-20 16:18:39.441354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:93096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.105 [2024-11-20 16:18:39.441381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:92448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x181400 00:28:22.105 [2024-11-20 16:18:39.441406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:92456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x181400 00:28:22.105 [2024-11-20 16:18:39.441433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:92464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x181400 00:28:22.105 [2024-11-20 16:18:39.441461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:93104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dbb00 len:0x1000 key:0x183f00 00:28:22.105 [2024-11-20 16:18:39.441488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:93112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.105 [2024-11-20 16:18:39.441514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:92480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x181400 00:28:22.105 [2024-11-20 16:18:39.441561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93120 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d8980 len:0x1000 key:0x183f00 00:28:22.105 [2024-11-20 16:18:39.441602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:93128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.105 [2024-11-20 16:18:39.441642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:93136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.105 [2024-11-20 16:18:39.441683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:92496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x181400 00:28:22.105 [2024-11-20 16:18:39.441724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:93144 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x2000138d4780 len:0x1000 key:0x183f00 00:28:22.105 [2024-11-20 16:18:39.441750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:92512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x181400 00:28:22.105 [2024-11-20 16:18:39.441791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:93152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d2680 len:0x1000 key:0x183f00 00:28:22.105 [2024-11-20 16:18:39.441831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:92528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x181400 00:28:22.105 [2024-11-20 16:18:39.441872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:93160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.105 [2024-11-20 16:18:39.441912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.105 [2024-11-20 16:18:39.441929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:93168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cf500 len:0x1000 key:0x183f00 00:28:22.105 [2024-11-20 16:18:39.441938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.441954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:92536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x181400 00:28:22.106 [2024-11-20 16:18:39.441964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.441980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:93176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.106 [2024-11-20 16:18:39.441990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:92544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x181400 00:28:22.106 [2024-11-20 16:18:39.442015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:93184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cb300 len:0x1000 key:0x183f00 00:28:22.106 [2024-11-20 16:18:39.442056] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:93192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca280 len:0x1000 key:0x183f00 00:28:22.106 [2024-11-20 16:18:39.442082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:93200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.106 [2024-11-20 16:18:39.442108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:92560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x181400 00:28:22.106 [2024-11-20 16:18:39.442133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:93208 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c7100 len:0x1000 key:0x183f00 00:28:22.106 [2024-11-20 16:18:39.442174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:92576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x181400 00:28:22.106 [2024-11-20 16:18:39.442215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:93216 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c5000 len:0x1000 key:0x183f00 00:28:22.106 [2024-11-20 16:18:39.442242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:93224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x183f00 00:28:22.106 [2024-11-20 16:18:39.442268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:93232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c2f00 len:0x1000 key:0x183f00 00:28:22.106 [2024-11-20 16:18:39.442294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:93240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.106 [2024-11-20 16:18:39.442333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 
cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:93248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c0e00 len:0x1000 key:0x183f00 00:28:22.106 [2024-11-20 16:18:39.442374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:92616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007538000 len:0x1000 key:0x181400 00:28:22.106 [2024-11-20 16:18:39.442400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:93256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.106 [2024-11-20 16:18:39.442441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:93264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.106 [2024-11-20 16:18:39.442467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:93272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.106 [2024-11-20 16:18:39.442493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:93280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.106 [2024-11-20 16:18:39.442538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:93288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bab00 len:0x1000 key:0x183f00 00:28:22.106 [2024-11-20 16:18:39.442564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:92648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x181400 00:28:22.106 [2024-11-20 16:18:39.442592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:92656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x181400 00:28:22.106 [2024-11-20 16:18:39.442634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:30 nsid:1 lba:93296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.106 [2024-11-20 16:18:39.442675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:92664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753c000 len:0x1000 key:0x181400 00:28:22.106 [2024-11-20 16:18:39.442702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:93304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b5880 len:0x1000 key:0x183f00 00:28:22.106 [2024-11-20 16:18:39.442728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:93312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.106 [2024-11-20 16:18:39.442753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:93320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.106 [2024-11-20 16:18:39.442793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:93328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b2700 len:0x1000 key:0x183f00 00:28:22.106 [2024-11-20 16:18:39.442833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:92680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x181400 00:28:22.106 [2024-11-20 16:18:39.442874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:93336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.106 [2024-11-20 16:18:39.442914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:93344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.106 [2024-11-20 16:18:39.442939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:93352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ae500 len:0x1000 key:0x183f00 00:28:22.106 [2024-11-20 16:18:39.442965] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.442985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:93360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.106 [2024-11-20 16:18:39.442995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.443011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:93368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ac400 len:0x1000 key:0x183f00 00:28:22.106 [2024-11-20 16:18:39.443021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.443051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:92704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x181400 00:28:22.106 [2024-11-20 16:18:39.443061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.106 [2024-11-20 16:18:39.443077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:92712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x181400 00:28:22.106 [2024-11-20 16:18:39.443087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:93376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.107 [2024-11-20 16:18:39.443113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:92728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x181400 00:28:22.107 [2024-11-20 16:18:39.443155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:93384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.107 [2024-11-20 16:18:39.443181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:93392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a6100 len:0x1000 key:0x183f00 00:28:22.107 [2024-11-20 16:18:39.443223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:93400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.107 [2024-11-20 16:18:39.443249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443279] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:93408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a4000 len:0x1000 key:0x183f00 00:28:22.107 [2024-11-20 16:18:39.443289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:93416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.107 [2024-11-20 16:18:39.443329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:93424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a1f00 len:0x1000 key:0x183f00 00:28:22.107 [2024-11-20 16:18:39.443370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:92760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x181400 00:28:22.107 [2024-11-20 16:18:39.443396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:93432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.107 [2024-11-20 16:18:39.443436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:92768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x181400 00:28:22.107 [2024-11-20 16:18:39.443463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:93440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.107 [2024-11-20 16:18:39.443503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:93448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389cc80 len:0x1000 key:0x183f00 00:28:22.107 [2024-11-20 16:18:39.443547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:93456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389bc00 len:0x1000 key:0x183f00 00:28:22.107 [2024-11-20 16:18:39.443574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:93464 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20001389ab80 len:0x1000 key:0x183f00 00:28:22.107 [2024-11-20 16:18:39.443615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:92784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x181400 00:28:22.107 [2024-11-20 16:18:39.443655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:92792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x181400 00:28:22.107 [2024-11-20 16:18:39.443696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:93472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013897a00 len:0x1000 key:0x183f00 00:28:22.107 [2024-11-20 16:18:39.443723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:93480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.107 [2024-11-20 16:18:39.443748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:93488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.107 [2024-11-20 16:18:39.443791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:93496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.107 [2024-11-20 16:18:39.443831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:93504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.107 [2024-11-20 16:18:39.443871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:93512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013892780 len:0x1000 key:0x183f00 00:28:22.107 [2024-11-20 16:18:39.443914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:93520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013891700 len:0x1000 key:0x183f00 00:28:22.107 [2024-11-20 16:18:39.443954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.443970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:92832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x181400 00:28:22.107 [2024-11-20 16:18:39.443980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.444012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:93528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388f600 len:0x1000 key:0x183f00 00:28:22.107 [2024-11-20 16:18:39.444022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.444038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:93536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388e580 len:0x1000 key:0x183f00 00:28:22.107 [2024-11-20 16:18:39.444048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.444079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:93544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388d500 len:0x1000 key:0x183f00 00:28:22.107 [2024-11-20 16:18:39.444089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.444104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:93552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.107 [2024-11-20 16:18:39.444114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.444130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:93560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.107 [2024-11-20 16:18:39.444140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.444170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:92856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x181400 00:28:22.107 [2024-11-20 16:18:39.444181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.444211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:93568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.107 [2024-11-20 16:18:39.444222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.444237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:93576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.107 [2024-11-20 16:18:39.444247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.444263] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:93584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.107 [2024-11-20 16:18:39.444273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.444303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:93592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.107 [2024-11-20 16:18:39.444313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.444344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:93600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013885100 len:0x1000 key:0x183f00 00:28:22.107 [2024-11-20 16:18:39.444354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.107 [2024-11-20 16:18:39.444384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:92896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x181400 00:28:22.108 [2024-11-20 16:18:39.444395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:39.444425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:92904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x181400 00:28:22.108 [2024-11-20 16:18:39.444435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:39.444451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:93608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013881f80 len:0x1000 key:0x183f00 00:28:22.108 [2024-11-20 16:18:39.444461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:39.444477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:92912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x181400 00:28:22.108 [2024-11-20 16:18:39.444487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:39.444504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:92920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x181400 00:28:22.108 [2024-11-20 16:18:39.444514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:39.444549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:93616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.108 [2024-11-20 16:18:39.444559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:39.444575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:93624 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:22.108 [2024-11-20 16:18:39.444587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:39.444618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:93632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.108 [2024-11-20 16:18:39.444628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:39.444659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:93640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387bc80 len:0x1000 key:0x183f00 00:28:22.108 [2024-11-20 16:18:39.444669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:39.444685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:93648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ac00 len:0x1000 key:0x183f00 00:28:22.108 [2024-11-20 16:18:39.444695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:39.444712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:92968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007560000 len:0x1000 key:0x181400 00:28:22.108 [2024-11-20 16:18:39.444722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:39.444738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:93656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013878b00 len:0x1000 key:0x183f00 00:28:22.108 [2024-11-20 16:18:39.444747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:39.444764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:93664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.108 [2024-11-20 16:18:39.444774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54102 cdw0:1551d000 sqhd:fe9e p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:39.459235] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.108 [2024-11-20 16:18:39.459255] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.108 [2024-11-20 16:18:39.459265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:93672 len:8 PRP1 0x0 PRP2 0x0 00:28:22.108 [2024-11-20 16:18:39.459275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:39.459340] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 00:28:22.108 [2024-11-20 16:18:39.459351] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:22.108 [2024-11-20 16:18:39.459378] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:28:22.108 [2024-11-20 16:18:39.461119] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.108 [2024-11-20 16:18:39.493659] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:22.108 [2024-11-20 16:18:42.878920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:67568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x181400 00:28:22.108 [2024-11-20 16:18:42.878964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.878982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:68272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013883000 len:0x1000 key:0x184000 00:28:22.108 [2024-11-20 16:18:42.878998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:68280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013881f80 len:0x1000 key:0x184000 00:28:22.108 [2024-11-20 16:18:42.879021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:68288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013880f00 len:0x1000 key:0x184000 00:28:22.108 [2024-11-20 16:18:42.879042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:67584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x181400 00:28:22.108 [2024-11-20 16:18:42.879062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.108 [2024-11-20 16:18:42.879082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.108 [2024-11-20 16:18:42.879102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:67600 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x181400 00:28:22.108 [2024-11-20 16:18:42.879122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:68312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bdc80 len:0x1000 
key:0x184000 00:28:22.108 [2024-11-20 16:18:42.879143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:67608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x181400 00:28:22.108 [2024-11-20 16:18:42.879164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:68320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.108 [2024-11-20 16:18:42.879183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:68328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bab00 len:0x1000 key:0x184000 00:28:22.108 [2024-11-20 16:18:42.879203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:68336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.108 [2024-11-20 16:18:42.879224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:68344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f2600 len:0x1000 key:0x184000 00:28:22.108 [2024-11-20 16:18:42.879246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:67640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x181400 00:28:22.108 [2024-11-20 16:18:42.879267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:67648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x181400 00:28:22.108 [2024-11-20 16:18:42.879288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:67656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x181400 00:28:22.108 [2024-11-20 16:18:42.879309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:68352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ee400 len:0x1000 key:0x184000 00:28:22.108 [2024-11-20 16:18:42.879331] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:68360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ed380 len:0x1000 key:0x184000 00:28:22.108 [2024-11-20 16:18:42.879352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.108 [2024-11-20 16:18:42.879363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:67680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x181400 00:28:22.109 [2024-11-20 16:18:42.879372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:68368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138eb280 len:0x1000 key:0x184000 00:28:22.109 [2024-11-20 16:18:42.879392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.109 [2024-11-20 16:18:42.879411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:67696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x181400 00:28:22.109 [2024-11-20 16:18:42.879431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:68384 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e8100 len:0x1000 key:0x184000 00:28:22.109 [2024-11-20 16:18:42.879451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:68392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e7080 len:0x1000 key:0x184000 00:28:22.109 [2024-11-20 16:18:42.879473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:67712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x181400 00:28:22.109 [2024-11-20 16:18:42.879493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.109 [2024-11-20 16:18:42.879512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 
cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:67728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x181400 00:28:22.109 [2024-11-20 16:18:42.879536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:68408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387fe80 len:0x1000 key:0x184000 00:28:22.109 [2024-11-20 16:18:42.879556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:68416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x184000 00:28:22.109 [2024-11-20 16:18:42.879575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:68424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.109 [2024-11-20 16:18:42.879594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:68432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.109 [2024-11-20 16:18:42.879613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:68440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b8a00 len:0x1000 key:0x184000 00:28:22.109 [2024-11-20 16:18:42.879633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:68448 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b7980 len:0x1000 key:0x184000 00:28:22.109 [2024-11-20 16:18:42.879653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:68456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b6900 len:0x1000 key:0x184000 00:28:22.109 [2024-11-20 16:18:42.879673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:67768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x181400 00:28:22.109 [2024-11-20 16:18:42.879693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879705] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:68464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.109 [2024-11-20 16:18:42.879713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:68472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0x184000 00:28:22.109 [2024-11-20 16:18:42.879734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:67784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007532000 len:0x1000 key:0x181400 00:28:22.109 [2024-11-20 16:18:42.879753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.109 [2024-11-20 16:18:42.879772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:68488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.109 [2024-11-20 16:18:42.879792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:68496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138af580 len:0x1000 key:0x184000 00:28:22.109 [2024-11-20 16:18:42.879812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:68504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ae500 len:0x1000 key:0x184000 00:28:22.109 [2024-11-20 16:18:42.879831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:67808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x181400 00:28:22.109 [2024-11-20 16:18:42.879851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:67816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x181400 00:28:22.109 [2024-11-20 16:18:42.879871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:67824 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x20000754a000 len:0x1000 key:0x181400 00:28:22.109 [2024-11-20 16:18:42.879891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:68512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.109 [2024-11-20 16:18:42.879910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:68520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x184000 00:28:22.109 [2024-11-20 16:18:42.879931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:67840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x181400 00:28:22.109 [2024-11-20 16:18:42.879951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.109 [2024-11-20 16:18:42.879962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:68528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a7180 len:0x1000 key:0x184000 00:28:22.110 [2024-11-20 16:18:42.879971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.879982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:67856 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x181400 00:28:22.110 [2024-11-20 16:18:42.879991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:68536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a5080 len:0x1000 key:0x184000 00:28:22.110 [2024-11-20 16:18:42.880011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.110 [2024-11-20 16:18:42.880031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:68552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a2f80 len:0x1000 key:0x184000 00:28:22.110 [2024-11-20 16:18:42.880051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:68560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a1f00 len:0x1000 key:0x184000 00:28:22.110 [2024-11-20 16:18:42.880070] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:67888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x181400 00:28:22.110 [2024-11-20 16:18:42.880089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.110 [2024-11-20 16:18:42.880109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:67896 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752e000 len:0x1000 key:0x181400 00:28:22.110 [2024-11-20 16:18:42.880128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.110 [2024-11-20 16:18:42.880148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:68584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.110 [2024-11-20 16:18:42.880169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:68592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.110 [2024-11-20 16:18:42.880189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:68600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.110 [2024-11-20 16:18:42.880208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:68608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dbb00 len:0x1000 key:0x184000 00:28:22.110 [2024-11-20 16:18:42.880228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:67920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x181400 00:28:22.110 [2024-11-20 16:18:42.880248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:68616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d9a00 len:0x1000 key:0x184000 00:28:22.110 [2024-11-20 16:18:42.880268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:68624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d8980 len:0x1000 key:0x184000 00:28:22.110 [2024-11-20 16:18:42.880288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:68632 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d7900 len:0x1000 key:0x184000 00:28:22.110 [2024-11-20 16:18:42.880308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:68640 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d6880 len:0x1000 key:0x184000 00:28:22.110 [2024-11-20 16:18:42.880328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:67944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x181400 00:28:22.110 [2024-11-20 16:18:42.880347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:67952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x181400 00:28:22.110 [2024-11-20 16:18:42.880367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:68648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d3700 len:0x1000 key:0x184000 00:28:22.110 [2024-11-20 16:18:42.880388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:68656 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a0e80 len:0x1000 key:0x184000 00:28:22.110 [2024-11-20 16:18:42.880408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.110 [2024-11-20 16:18:42.880428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 
nsid:1 lba:68672 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ed80 len:0x1000 key:0x184000 00:28:22.110 [2024-11-20 16:18:42.880447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.110 [2024-11-20 16:18:42.880467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:68688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389cc80 len:0x1000 key:0x184000 00:28:22.110 [2024-11-20 16:18:42.880487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.110 [2024-11-20 16:18:42.880507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:68704 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ab80 len:0x1000 key:0x184000 00:28:22.110 [2024-11-20 16:18:42.880532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:68712 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013899b00 len:0x1000 key:0x184000 00:28:22.110 [2024-11-20 16:18:42.880553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.110 [2024-11-20 16:18:42.880573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:68728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013897a00 len:0x1000 key:0x184000 00:28:22.110 [2024-11-20 16:18:42.880594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:67992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007528000 len:0x1000 key:0x181400 00:28:22.110 [2024-11-20 16:18:42.880614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:68000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x181400 00:28:22.110 [2024-11-20 
16:18:42.880636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.110 [2024-11-20 16:18:42.880656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:68744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013893800 len:0x1000 key:0x184000 00:28:22.110 [2024-11-20 16:18:42.880675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.110 [2024-11-20 16:18:42.880687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:68016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x181400 00:28:22.110 [2024-11-20 16:18:42.880696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.880707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.111 [2024-11-20 16:18:42.880715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.880726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:68024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x181400 00:28:22.111 [2024-11-20 16:18:42.880734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.880745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:68760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.111 [2024-11-20 16:18:42.880754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.880764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:68768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.111 [2024-11-20 16:18:42.880773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.880784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:68048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x181400 00:28:22.111 [2024-11-20 16:18:42.880792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.880803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.111 [2024-11-20 16:18:42.880812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 
16:18:42.880823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:68784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388b400 len:0x1000 key:0x184000 00:28:22.111 [2024-11-20 16:18:42.880832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.880842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:68056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x181400 00:28:22.111 [2024-11-20 16:18:42.880852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.880863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.111 [2024-11-20 16:18:42.880872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.880882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:68072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x181400 00:28:22.111 [2024-11-20 16:18:42.880892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.880902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:68080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007506000 len:0x1000 key:0x181400 00:28:22.111 [2024-11-20 16:18:42.880911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.880922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:68800 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d0580 len:0x1000 key:0x184000 00:28:22.111 [2024-11-20 16:18:42.880931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.880942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:68808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.111 [2024-11-20 16:18:42.880951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.880961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:68096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x181400 00:28:22.111 [2024-11-20 16:18:42.880970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.880981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:68104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x181400 00:28:22.111 [2024-11-20 16:18:42.880991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 
nsid:1 lba:68112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007552000 len:0x1000 key:0x181400 00:28:22.111 [2024-11-20 16:18:42.881010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:68816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cb300 len:0x1000 key:0x184000 00:28:22.111 [2024-11-20 16:18:42.881030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.111 [2024-11-20 16:18:42.881049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:68832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387bc80 len:0x1000 key:0x184000 00:28:22.111 [2024-11-20 16:18:42.881069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:68840 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ac00 len:0x1000 key:0x184000 00:28:22.111 [2024-11-20 16:18:42.881089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.111 [2024-11-20 16:18:42.881109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:68856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.111 [2024-11-20 16:18:42.881128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.111 [2024-11-20 16:18:42.881147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:68168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x181400 00:28:22.111 [2024-11-20 16:18:42.881167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:68872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f9980 len:0x1000 key:0x184000 00:28:22.111 [2024-11-20 16:18:42.881187] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:68880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.111 [2024-11-20 16:18:42.881206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:68888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f7880 len:0x1000 key:0x184000 00:28:22.111 [2024-11-20 16:18:42.881226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:68896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.111 [2024-11-20 16:18:42.881245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:68904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f5780 len:0x1000 key:0x184000 00:28:22.111 [2024-11-20 16:18:42.881264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:68912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f4700 len:0x1000 key:0x184000 00:28:22.111 [2024-11-20 16:18:42.881284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:68920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ca280 len:0x1000 key:0x184000 00:28:22.111 [2024-11-20 16:18:42.881305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:68200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x181400 00:28:22.111 [2024-11-20 16:18:42.881325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:68928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.111 [2024-11-20 16:18:42.881344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:68936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.111 [2024-11-20 16:18:42.881364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 
16:18:42.881374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:68224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x181400 00:28:22.111 [2024-11-20 16:18:42.881383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:68944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c5000 len:0x1000 key:0x184000 00:28:22.111 [2024-11-20 16:18:42.881402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.111 [2024-11-20 16:18:42.881413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:68952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.111 [2024-11-20 16:18:42.881422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:42.881432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:68232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x181400 00:28:22.112 [2024-11-20 16:18:42.881441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:42.881452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:68240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x181400 00:28:22.112 [2024-11-20 16:18:42.881461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:42.881472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:68960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.112 [2024-11-20 16:18:42.881481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:42.881491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:68968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013886180 len:0x1000 key:0x184000 00:28:22.112 [2024-11-20 16:18:42.881500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54104 cdw0:1551d000 sqhd:aba4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:42.883511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.112 [2024-11-20 16:18:42.883529] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.112 [2024-11-20 16:18:42.883538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:68264 len:8 PRP1 0x0 PRP2 0x0 00:28:22.112 [2024-11-20 16:18:42.883553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:42.883591] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 
00:28:22.112 [2024-11-20 16:18:42.883603] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4421 to 192.168.100.8:4422 00:28:22.112 [2024-11-20 16:18:42.883613] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.112 [2024-11-20 16:18:42.885454] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.112 [2024-11-20 16:18:42.899757] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:22.112 [2024-11-20 16:18:42.933995] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:22.112 [2024-11-20 16:18:47.289393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:119880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013891700 len:0x1000 key:0x183f00 00:28:22.112 [2024-11-20 16:18:47.289434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:119888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.112 [2024-11-20 16:18:47.289463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:119240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x181400 00:28:22.112 [2024-11-20 16:18:47.289484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:119896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.112 [2024-11-20 16:18:47.289504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:119248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x181400 00:28:22.112 [2024-11-20 16:18:47.289527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:119256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007558000 len:0x1000 key:0x181400 00:28:22.112 [2024-11-20 16:18:47.289548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:119904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388b400 len:0x1000 key:0x183f00 00:28:22.112 [2024-11-20 16:18:47.289568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 
lba:119912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.112 [2024-11-20 16:18:47.289588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:119272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007564000 len:0x1000 key:0x181400 00:28:22.112 [2024-11-20 16:18:47.289608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:119920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d2680 len:0x1000 key:0x183f00 00:28:22.112 [2024-11-20 16:18:47.289634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:119928 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d1600 len:0x1000 key:0x183f00 00:28:22.112 [2024-11-20 16:18:47.289654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:119936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.112 [2024-11-20 16:18:47.289674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:119288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x181400 00:28:22.112 [2024-11-20 16:18:47.289695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:119296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007546000 len:0x1000 key:0x181400 00:28:22.112 [2024-11-20 16:18:47.289716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:119304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007536000 len:0x1000 key:0x181400 00:28:22.112 [2024-11-20 16:18:47.289737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:119944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cc380 len:0x1000 key:0x183f00 00:28:22.112 [2024-11-20 16:18:47.289757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:119320 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x181400 
00:28:22.112 [2024-11-20 16:18:47.289777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:119328 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007578000 len:0x1000 key:0x181400 00:28:22.112 [2024-11-20 16:18:47.289799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:119952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.112 [2024-11-20 16:18:47.289819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:119960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.112 [2024-11-20 16:18:47.289840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:119968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b6900 len:0x1000 key:0x183f00 00:28:22.112 [2024-11-20 16:18:47.289862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:119976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.112 [2024-11-20 16:18:47.289883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:119984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.112 [2024-11-20 16:18:47.289903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:119344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x181400 00:28:22.112 [2024-11-20 16:18:47.289924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:119352 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x181400 00:28:22.112 [2024-11-20 16:18:47.289945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:119992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.112 [2024-11-20 16:18:47.289964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 
dnr:0 00:28:22.112 [2024-11-20 16:18:47.289975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.112 [2024-11-20 16:18:47.289984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.289994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bcc00 len:0x1000 key:0x183f00 00:28:22.112 [2024-11-20 16:18:47.290004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.112 [2024-11-20 16:18:47.290015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:120016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bbb80 len:0x1000 key:0x183f00 00:28:22.113 [2024-11-20 16:18:47.290024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:119376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007504000 len:0x1000 key:0x181400 00:28:22.113 [2024-11-20 16:18:47.290043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.113 [2024-11-20 16:18:47.290063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f2600 len:0x1000 key:0x183f00 00:28:22.113 [2024-11-20 16:18:47.290083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:119392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007518000 len:0x1000 key:0x181400 00:28:22.113 [2024-11-20 16:18:47.290104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:120040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b1680 len:0x1000 key:0x183f00 00:28:22.113 [2024-11-20 16:18:47.290124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:119408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x181400 00:28:22.113 [2024-11-20 16:18:47.290143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290153] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:120048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.113 [2024-11-20 16:18:47.290162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ae500 len:0x1000 key:0x183f00 00:28:22.113 [2024-11-20 16:18:47.290182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120064 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ad480 len:0x1000 key:0x183f00 00:28:22.113 [2024-11-20 16:18:47.290202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120072 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ac400 len:0x1000 key:0x183f00 00:28:22.113 [2024-11-20 16:18:47.290221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ab380 len:0x1000 key:0x183f00 00:28:22.113 [2024-11-20 16:18:47.290240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138aa300 len:0x1000 key:0x183f00 00:28:22.113 [2024-11-20 16:18:47.290260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387cd00 len:0x1000 key:0x183f00 00:28:22.113 [2024-11-20 16:18:47.290280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.113 [2024-11-20 16:18:47.290299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.113 [2024-11-20 16:18:47.290319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120120 len:8 SGL KEYED DATA BLOCK ADDRESS 
0x200013879b80 len:0x1000 key:0x183f00 00:28:22.113 [2024-11-20 16:18:47.290339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013878b00 len:0x1000 key:0x183f00 00:28:22.113 [2024-11-20 16:18:47.290359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.113 [2024-11-20 16:18:47.290378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013876a00 len:0x1000 key:0x183f00 00:28:22.113 [2024-11-20 16:18:47.290399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:119456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751a000 len:0x1000 key:0x181400 00:28:22.113 [2024-11-20 16:18:47.290418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f8900 len:0x1000 key:0x183f00 00:28:22.113 [2024-11-20 16:18:47.290438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.113 [2024-11-20 16:18:47.290459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120168 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f6800 len:0x1000 key:0x183f00 00:28:22.113 [2024-11-20 16:18:47.290479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f5780 len:0x1000 key:0x183f00 00:28:22.113 [2024-11-20 16:18:47.290498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.113 [2024-11-20 16:18:47.290522] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:119472 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007500000 len:0x1000 key:0x181400 00:28:22.113 [2024-11-20 16:18:47.290543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:119480 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x181400 00:28:22.113 [2024-11-20 16:18:47.290563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120192 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c8180 len:0x1000 key:0x183f00 00:28:22.113 [2024-11-20 16:18:47.290583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.113 [2024-11-20 16:18:47.290603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.113 [2024-11-20 16:18:47.290622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.113 [2024-11-20 16:18:47.290642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.113 [2024-11-20 16:18:47.290653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120224 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ed380 len:0x1000 key:0x183f00 00:28:22.114 [2024-11-20 16:18:47.290662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.290673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:119512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x181400 00:28:22.114 [2024-11-20 16:18:47.290682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.290692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:119520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753a000 len:0x1000 key:0x181400 00:28:22.114 [2024-11-20 16:18:47.290701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 
p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.290712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:119528 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x181400 00:28:22.114 [2024-11-20 16:18:47.290721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.290732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:120232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e9180 len:0x1000 key:0x183f00 00:28:22.114 [2024-11-20 16:18:47.290741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.290751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a9280 len:0x1000 key:0x183f00 00:28:22.114 [2024-11-20 16:18:47.290760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.290772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a8200 len:0x1000 key:0x183f00 00:28:22.114 [2024-11-20 16:18:47.290781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.290792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:119544 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x181400 00:28:22.114 [2024-11-20 16:18:47.290802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.290812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:120256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-11-20 16:18:47.290821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.290831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-11-20 16:18:47.290840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.290851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:119552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x181400 00:28:22.114 [2024-11-20 16:18:47.290860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.290870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:119560 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x181400 00:28:22.114 [2024-11-20 16:18:47.290879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.290889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120272 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a1f00 len:0x1000 key:0x183f00 00:28:22.114 [2024-11-20 16:18:47.290899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.290910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:119576 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x181400 00:28:22.114 [2024-11-20 16:18:47.290919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.290930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:119584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x181400 00:28:22.114 [2024-11-20 16:18:47.290939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.290949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-11-20 16:18:47.290958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.290969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:120288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dfd00 len:0x1000 key:0x183f00 00:28:22.114 [2024-11-20 16:18:47.290978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.290988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:120296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-11-20 16:18:47.290998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ddc00 len:0x1000 key:0x183f00 00:28:22.114 [2024-11-20 16:18:47.291018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dcb80 len:0x1000 key:0x183f00 00:28:22.114 [2024-11-20 16:18:47.291038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:119608 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x181400 00:28:22.114 [2024-11-20 16:18:47.291058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120320 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-11-20 16:18:47.291077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:119616 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007526000 len:0x1000 key:0x181400 00:28:22.114 [2024-11-20 16:18:47.291097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:119624 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007508000 len:0x1000 key:0x181400 00:28:22.114 [2024-11-20 16:18:47.291117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:120328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-11-20 16:18:47.291136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:120336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-11-20 16:18:47.291155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:119648 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x181400 00:28:22.114 [2024-11-20 16:18:47.291175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d4780 len:0x1000 key:0x183f00 00:28:22.114 [2024-11-20 16:18:47.291194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:120352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-11-20 16:18:47.291214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c7100 len:0x1000 key:0x183f00 00:28:22.114 [2024-11-20 16:18:47.291235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-11-20 16:18:47.291254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:120376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c5000 len:0x1000 key:0x183f00 00:28:22.114 [2024-11-20 16:18:47.291273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:119680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007540000 len:0x1000 key:0x181400 00:28:22.114 [2024-11-20 16:18:47.291293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-11-20 16:18:47.291312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013888280 len:0x1000 key:0x183f00 00:28:22.114 [2024-11-20 16:18:47.291336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:119696 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x181400 00:28:22.114 [2024-11-20 16:18:47.291356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.114 [2024-11-20 16:18:47.291366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.114 [2024-11-20 16:18:47.291375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.115 [2024-11-20 16:18:47.291395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120416 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389fe00 len:0x1000 key:0x183f00 00:28:22.115 [2024-11-20 16:18:47.291416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:119720 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x181400 00:28:22.115 [2024-11-20 16:18:47.291436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 
16:18:47.291446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:119728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x181400 00:28:22.115 [2024-11-20 16:18:47.291457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389cc80 len:0x1000 key:0x183f00 00:28:22.115 [2024-11-20 16:18:47.291477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.115 [2024-11-20 16:18:47.291496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:119744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754a000 len:0x1000 key:0x181400 00:28:22.115 [2024-11-20 16:18:47.291519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.115 [2024-11-20 16:18:47.291540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.115 [2024-11-20 16:18:47.291559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:120456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.115 [2024-11-20 16:18:47.291579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.115 [2024-11-20 16:18:47.291599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:119776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x181400 00:28:22.115 [2024-11-20 16:18:47.291618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.115 
[2024-11-20 16:18:47.291637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:119792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x181400 00:28:22.115 [2024-11-20 16:18:47.291660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.115 [2024-11-20 16:18:47.291679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:120488 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387dd80 len:0x1000 key:0x183f00 00:28:22.115 [2024-11-20 16:18:47.291702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:119808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x181400 00:28:22.115 [2024-11-20 16:18:47.291722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:119816 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x181400 00:28:22.115 [2024-11-20 16:18:47.291741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:120496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013883000 len:0x1000 key:0x183f00 00:28:22.115 [2024-11-20 16:18:47.291761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.115 [2024-11-20 16:18:47.291781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013880f00 len:0x1000 key:0x183f00 00:28:22.115 [2024-11-20 16:18:47.291801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.115 [2024-11-20 16:18:47.291819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:120528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.115 [2024-11-20 16:18:47.291839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:120536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bfd80 len:0x1000 key:0x183f00 00:28:22.115 [2024-11-20 16:18:47.291859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.115 [2024-11-20 16:18:47.291878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120552 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013897a00 len:0x1000 key:0x183f00 00:28:22.115 [2024-11-20 16:18:47.291897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:119848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x181400 00:28:22.115 [2024-11-20 16:18:47.291918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:22.115 [2024-11-20 16:18:47.291939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013894880 len:0x1000 key:0x183f00 00:28:22.115 [2024-11-20 16:18:47.291958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.291970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:119872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x181400 00:28:22.115 [2024-11-20 16:18:47.291979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54106 cdw0:1551d000 sqhd:3cc4 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.293944] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:22.115 [2024-11-20 16:18:47.293959] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:22.115 [2024-11-20 16:18:47.293967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120576 len:8 PRP1 0x0 PRP2 0x0 00:28:22.115 [2024-11-20 
16:18:47.293977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:22.115 [2024-11-20 16:18:47.294014] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4980 was disconnected and freed. reset controller. 00:28:22.115 [2024-11-20 16:18:47.294026] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4422 to 192.168.100.8:4420 00:28:22.115 [2024-11-20 16:18:47.294037] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:22.115 [2024-11-20 16:18:47.295927] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:22.116 [2024-11-20 16:18:47.310347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:22.116 [2024-11-20 16:18:47.345458] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:22.116 00:28:22.116 Latency(us) 00:28:22.116 [2024-11-20T15:18:52.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.116 [2024-11-20T15:18:52.921Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:22.116 Verification LBA range: start 0x0 length 0x4000 00:28:22.116 NVMe0n1 : 15.00 20390.77 79.65 334.57 0.00 6163.13 326.04 1033476.51 00:28:22.116 [2024-11-20T15:18:52.921Z] =================================================================================================================== 00:28:22.116 [2024-11-20T15:18:52.921Z] Total : 20390.77 79.65 334.57 0.00 6163.13 326.04 1033476.51 00:28:22.116 Received shutdown signal, test time was about 15.000000 seconds 00:28:22.116 00:28:22.116 Latency(us) 00:28:22.116 [2024-11-20T15:18:52.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:22.116 [2024-11-20T15:18:52.921Z] =================================================================================================================== 00:28:22.116 [2024-11-20T15:18:52.921Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:22.116 16:18:52 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:28:22.116 16:18:52 -- host/failover.sh@65 -- # count=3 00:28:22.116 16:18:52 -- host/failover.sh@67 -- # (( count != 3 )) 00:28:22.116 16:18:52 -- host/failover.sh@73 -- # bdevperf_pid=1493330 00:28:22.116 16:18:52 -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:28:22.116 16:18:52 -- host/failover.sh@75 -- # waitforlisten 1493330 /var/tmp/bdevperf.sock 00:28:22.116 16:18:52 -- common/autotest_common.sh@829 -- # '[' -z 1493330 ']' 00:28:22.116 16:18:52 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:22.116 16:18:52 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:22.116 16:18:52 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:22.116 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:22.116 16:18:52 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:22.116 16:18:52 -- common/autotest_common.sh@10 -- # set +x 00:28:23.050 16:18:53 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:23.050 16:18:53 -- common/autotest_common.sh@862 -- # return 0 00:28:23.050 16:18:53 -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4421 00:28:23.050 [2024-11-20 16:18:53.683084] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4421 *** 00:28:23.050 16:18:53 -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4422 00:28:23.308 [2024-11-20 16:18:53.859713] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4422 *** 00:28:23.308 16:18:53 -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:23.565 NVMe0n1 00:28:23.565 16:18:54 -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:23.823 00:28:23.823 16:18:54 -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:24.082 00:28:24.082 16:18:54 -- host/failover.sh@82 -- # grep -q NVMe0 00:28:24.082 16:18:54 -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:24.082 16:18:54 -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:24.339 16:18:55 -- host/failover.sh@87 -- # sleep 3 00:28:27.692 16:18:58 -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:27.692 16:18:58 -- host/failover.sh@88 -- # grep -q NVMe0 00:28:27.692 16:18:58 -- host/failover.sh@90 -- # run_test_pid=1494253 00:28:27.692 16:18:58 -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:27.692 16:18:58 -- host/failover.sh@92 -- # wait 1494253 00:28:28.626 0 00:28:28.626 16:18:59 -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:28.626 [2024-11-20 16:18:52.713574] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:28:28.626 [2024-11-20 16:18:52.713632] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1493330 ] 00:28:28.626 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.626 [2024-11-20 16:18:52.785921] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:28.626 [2024-11-20 16:18:52.818639] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.626 [2024-11-20 16:18:54.999964] bdev_nvme.c:1843:bdev_nvme_failover_trid: *NOTICE*: Start failover from 192.168.100.8:4420 to 192.168.100.8:4421 00:28:28.626 [2024-11-20 16:18:55.000557] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:28.626 [2024-11-20 16:18:55.000590] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:28.626 [2024-11-20 16:18:55.017804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:28.626 [2024-11-20 16:18:55.034084] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:28.626 Running I/O for 1 seconds... 00:28:28.626 00:28:28.626 Latency(us) 00:28:28.626 [2024-11-20T15:18:59.431Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:28.626 [2024-11-20T15:18:59.431Z] Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:28.626 Verification LBA range: start 0x0 length 0x4000 00:28:28.626 NVMe0n1 : 1.00 25635.43 100.14 0.00 0.00 4969.18 1166.54 12845.06 00:28:28.626 [2024-11-20T15:18:59.431Z] =================================================================================================================== 00:28:28.626 [2024-11-20T15:18:59.431Z] Total : 25635.43 100.14 0.00 0.00 4969.18 1166.54 12845.06 00:28:28.626 16:18:59 -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:28.626 16:18:59 -- host/failover.sh@95 -- # grep -q NVMe0 00:28:28.884 16:18:59 -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:29.142 16:18:59 -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:29.142 16:18:59 -- host/failover.sh@99 -- # grep -q NVMe0 00:28:29.142 16:18:59 -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t rdma -a 192.168.100.8 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:29.400 16:19:00 -- host/failover.sh@101 -- # sleep 3 00:28:32.681 16:19:03 -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:32.681 16:19:03 -- host/failover.sh@103 -- # grep -q NVMe0 00:28:32.681 16:19:03 -- host/failover.sh@108 -- # killprocess 1493330 00:28:32.681 16:19:03 -- common/autotest_common.sh@936 -- # '[' -z 1493330 ']' 00:28:32.681 16:19:03 -- common/autotest_common.sh@940 -- # kill -0 1493330 00:28:32.681 16:19:03 -- common/autotest_common.sh@941 -- # uname 00:28:32.681 16:19:03 -- 
common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:32.681 16:19:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1493330 00:28:32.681 16:19:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:32.681 16:19:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:32.681 16:19:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1493330' 00:28:32.681 killing process with pid 1493330 00:28:32.681 16:19:03 -- common/autotest_common.sh@955 -- # kill 1493330 00:28:32.681 16:19:03 -- common/autotest_common.sh@960 -- # wait 1493330 00:28:32.939 16:19:03 -- host/failover.sh@110 -- # sync 00:28:32.939 16:19:03 -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:33.197 16:19:03 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:28:33.198 16:19:03 -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:33.198 16:19:03 -- host/failover.sh@116 -- # nvmftestfini 00:28:33.198 16:19:03 -- nvmf/common.sh@476 -- # nvmfcleanup 00:28:33.198 16:19:03 -- nvmf/common.sh@116 -- # sync 00:28:33.198 16:19:03 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:28:33.198 16:19:03 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:28:33.198 16:19:03 -- nvmf/common.sh@119 -- # set +e 00:28:33.198 16:19:03 -- nvmf/common.sh@120 -- # for i in {1..20} 00:28:33.198 16:19:03 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:28:33.198 rmmod nvme_rdma 00:28:33.198 rmmod nvme_fabrics 00:28:33.198 16:19:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:28:33.198 16:19:03 -- nvmf/common.sh@123 -- # set -e 00:28:33.198 16:19:03 -- nvmf/common.sh@124 -- # return 0 00:28:33.198 16:19:03 -- nvmf/common.sh@477 -- # '[' -n 1490083 ']' 00:28:33.198 16:19:03 -- nvmf/common.sh@478 -- # killprocess 1490083 00:28:33.198 16:19:03 -- common/autotest_common.sh@936 -- # '[' -z 1490083 ']' 00:28:33.198 16:19:03 -- common/autotest_common.sh@940 -- # kill -0 1490083 00:28:33.198 16:19:03 -- common/autotest_common.sh@941 -- # uname 00:28:33.198 16:19:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:33.198 16:19:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1490083 00:28:33.198 16:19:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:33.198 16:19:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:33.198 16:19:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1490083' 00:28:33.198 killing process with pid 1490083 00:28:33.198 16:19:03 -- common/autotest_common.sh@955 -- # kill 1490083 00:28:33.198 16:19:03 -- common/autotest_common.sh@960 -- # wait 1490083 00:28:33.456 16:19:04 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:28:33.456 16:19:04 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:28:33.456 00:28:33.456 real 0m37.085s 00:28:33.456 user 2m4.611s 00:28:33.456 sys 0m7.187s 00:28:33.456 16:19:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:33.456 16:19:04 -- common/autotest_common.sh@10 -- # set +x 00:28:33.456 ************************************ 00:28:33.456 END TEST nvmf_failover 00:28:33.456 ************************************ 00:28:33.456 16:19:04 -- nvmf/nvmf.sh@101 -- # run_test nvmf_discovery /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:28:33.456 16:19:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:33.456 
16:19:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:33.456 16:19:04 -- common/autotest_common.sh@10 -- # set +x 00:28:33.456 ************************************ 00:28:33.456 START TEST nvmf_discovery 00:28:33.456 ************************************ 00:28:33.456 16:19:04 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=rdma 00:28:33.715 * Looking for test storage... 00:28:33.715 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:33.715 16:19:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:33.715 16:19:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:33.715 16:19:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:33.715 16:19:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:33.715 16:19:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:33.715 16:19:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:33.715 16:19:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:33.715 16:19:04 -- scripts/common.sh@335 -- # IFS=.-: 00:28:33.715 16:19:04 -- scripts/common.sh@335 -- # read -ra ver1 00:28:33.715 16:19:04 -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.715 16:19:04 -- scripts/common.sh@336 -- # read -ra ver2 00:28:33.715 16:19:04 -- scripts/common.sh@337 -- # local 'op=<' 00:28:33.715 16:19:04 -- scripts/common.sh@339 -- # ver1_l=2 00:28:33.715 16:19:04 -- scripts/common.sh@340 -- # ver2_l=1 00:28:33.715 16:19:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:33.715 16:19:04 -- scripts/common.sh@343 -- # case "$op" in 00:28:33.715 16:19:04 -- scripts/common.sh@344 -- # : 1 00:28:33.715 16:19:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:33.715 16:19:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:33.715 16:19:04 -- scripts/common.sh@364 -- # decimal 1 00:28:33.715 16:19:04 -- scripts/common.sh@352 -- # local d=1 00:28:33.715 16:19:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.715 16:19:04 -- scripts/common.sh@354 -- # echo 1 00:28:33.715 16:19:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:33.715 16:19:04 -- scripts/common.sh@365 -- # decimal 2 00:28:33.715 16:19:04 -- scripts/common.sh@352 -- # local d=2 00:28:33.715 16:19:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.715 16:19:04 -- scripts/common.sh@354 -- # echo 2 00:28:33.715 16:19:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:33.715 16:19:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:33.715 16:19:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:33.715 16:19:04 -- scripts/common.sh@367 -- # return 0 00:28:33.715 16:19:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.715 16:19:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:33.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.715 --rc genhtml_branch_coverage=1 00:28:33.715 --rc genhtml_function_coverage=1 00:28:33.715 --rc genhtml_legend=1 00:28:33.715 --rc geninfo_all_blocks=1 00:28:33.715 --rc geninfo_unexecuted_blocks=1 00:28:33.715 00:28:33.715 ' 00:28:33.715 16:19:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:33.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.715 --rc genhtml_branch_coverage=1 00:28:33.715 --rc genhtml_function_coverage=1 00:28:33.715 --rc genhtml_legend=1 00:28:33.715 --rc geninfo_all_blocks=1 00:28:33.715 --rc geninfo_unexecuted_blocks=1 00:28:33.715 00:28:33.715 ' 00:28:33.715 16:19:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:33.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.715 --rc genhtml_branch_coverage=1 00:28:33.715 --rc genhtml_function_coverage=1 00:28:33.715 --rc genhtml_legend=1 00:28:33.715 --rc geninfo_all_blocks=1 00:28:33.715 --rc geninfo_unexecuted_blocks=1 00:28:33.715 00:28:33.715 ' 00:28:33.715 16:19:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:33.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.715 --rc genhtml_branch_coverage=1 00:28:33.715 --rc genhtml_function_coverage=1 00:28:33.715 --rc genhtml_legend=1 00:28:33.715 --rc geninfo_all_blocks=1 00:28:33.715 --rc geninfo_unexecuted_blocks=1 00:28:33.715 00:28:33.715 ' 00:28:33.715 16:19:04 -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.715 16:19:04 -- nvmf/common.sh@7 -- # uname -s 00:28:33.715 16:19:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.715 16:19:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.716 16:19:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.716 16:19:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.716 16:19:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.716 16:19:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.716 16:19:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.716 16:19:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.716 16:19:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.716 16:19:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.716 16:19:04 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:33.716 16:19:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:33.716 16:19:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.716 16:19:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.716 16:19:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.716 16:19:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:33.716 16:19:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.716 16:19:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.716 16:19:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.716 16:19:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.716 16:19:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.716 16:19:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.716 16:19:04 -- paths/export.sh@5 -- # export PATH 00:28:33.716 16:19:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.716 16:19:04 -- nvmf/common.sh@46 -- # : 0 00:28:33.716 16:19:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:33.716 16:19:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:33.716 16:19:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:33.716 16:19:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.716 16:19:04 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.716 16:19:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:33.716 16:19:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:33.716 16:19:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:33.716 16:19:04 -- host/discovery.sh@11 -- # '[' rdma == rdma ']' 00:28:33.716 16:19:04 -- host/discovery.sh@12 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:33.716 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:28:33.716 16:19:04 -- host/discovery.sh@13 -- # exit 0 00:28:33.716 00:28:33.716 real 0m0.221s 00:28:33.716 user 0m0.131s 00:28:33.716 sys 0m0.106s 00:28:33.716 16:19:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:33.716 16:19:04 -- common/autotest_common.sh@10 -- # set +x 00:28:33.716 ************************************ 00:28:33.716 END TEST nvmf_discovery 00:28:33.716 ************************************ 00:28:33.716 16:19:04 -- nvmf/nvmf.sh@102 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:28:33.716 16:19:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:33.716 16:19:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:33.716 16:19:04 -- common/autotest_common.sh@10 -- # set +x 00:28:33.716 ************************************ 00:28:33.716 START TEST nvmf_discovery_remove_ifc 00:28:33.716 ************************************ 00:28:33.716 16:19:04 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=rdma 00:28:33.716 * Looking for test storage... 00:28:33.975 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:33.975 16:19:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:33.975 16:19:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:33.975 16:19:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:33.975 16:19:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:33.975 16:19:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:33.975 16:19:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:33.975 16:19:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:33.975 16:19:04 -- scripts/common.sh@335 -- # IFS=.-: 00:28:33.975 16:19:04 -- scripts/common.sh@335 -- # read -ra ver1 00:28:33.975 16:19:04 -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.975 16:19:04 -- scripts/common.sh@336 -- # read -ra ver2 00:28:33.975 16:19:04 -- scripts/common.sh@337 -- # local 'op=<' 00:28:33.975 16:19:04 -- scripts/common.sh@339 -- # ver1_l=2 00:28:33.975 16:19:04 -- scripts/common.sh@340 -- # ver2_l=1 00:28:33.975 16:19:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:33.975 16:19:04 -- scripts/common.sh@343 -- # case "$op" in 00:28:33.975 16:19:04 -- scripts/common.sh@344 -- # : 1 00:28:33.975 16:19:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:33.975 16:19:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:33.975 16:19:04 -- scripts/common.sh@364 -- # decimal 1 00:28:33.975 16:19:04 -- scripts/common.sh@352 -- # local d=1 00:28:33.975 16:19:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.975 16:19:04 -- scripts/common.sh@354 -- # echo 1 00:28:33.975 16:19:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:33.975 16:19:04 -- scripts/common.sh@365 -- # decimal 2 00:28:33.975 16:19:04 -- scripts/common.sh@352 -- # local d=2 00:28:33.975 16:19:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.975 16:19:04 -- scripts/common.sh@354 -- # echo 2 00:28:33.975 16:19:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:33.975 16:19:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:33.975 16:19:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:33.975 16:19:04 -- scripts/common.sh@367 -- # return 0 00:28:33.975 16:19:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.975 16:19:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:33.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.975 --rc genhtml_branch_coverage=1 00:28:33.975 --rc genhtml_function_coverage=1 00:28:33.975 --rc genhtml_legend=1 00:28:33.975 --rc geninfo_all_blocks=1 00:28:33.975 --rc geninfo_unexecuted_blocks=1 00:28:33.975 00:28:33.975 ' 00:28:33.975 16:19:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:33.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.975 --rc genhtml_branch_coverage=1 00:28:33.975 --rc genhtml_function_coverage=1 00:28:33.975 --rc genhtml_legend=1 00:28:33.975 --rc geninfo_all_blocks=1 00:28:33.975 --rc geninfo_unexecuted_blocks=1 00:28:33.975 00:28:33.975 ' 00:28:33.975 16:19:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:33.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.975 --rc genhtml_branch_coverage=1 00:28:33.975 --rc genhtml_function_coverage=1 00:28:33.975 --rc genhtml_legend=1 00:28:33.975 --rc geninfo_all_blocks=1 00:28:33.975 --rc geninfo_unexecuted_blocks=1 00:28:33.975 00:28:33.975 ' 00:28:33.975 16:19:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:33.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.975 --rc genhtml_branch_coverage=1 00:28:33.975 --rc genhtml_function_coverage=1 00:28:33.975 --rc genhtml_legend=1 00:28:33.975 --rc geninfo_all_blocks=1 00:28:33.975 --rc geninfo_unexecuted_blocks=1 00:28:33.975 00:28:33.975 ' 00:28:33.975 16:19:04 -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.975 16:19:04 -- nvmf/common.sh@7 -- # uname -s 00:28:33.975 16:19:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.975 16:19:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.975 16:19:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.975 16:19:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.975 16:19:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:33.975 16:19:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:33.975 16:19:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.976 16:19:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:33.976 16:19:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.976 16:19:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:33.976 16:19:04 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:33.976 16:19:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:33.976 16:19:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.976 16:19:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:33.976 16:19:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:33.976 16:19:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:33.976 16:19:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.976 16:19:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.976 16:19:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.976 16:19:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.976 16:19:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.976 16:19:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.976 16:19:04 -- paths/export.sh@5 -- # export PATH 00:28:33.976 16:19:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.976 16:19:04 -- nvmf/common.sh@46 -- # : 0 00:28:33.976 16:19:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:33.976 16:19:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:33.976 16:19:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:33.976 16:19:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.976 16:19:04 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.976 16:19:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:33.976 16:19:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:33.976 16:19:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:33.976 16:19:04 -- host/discovery_remove_ifc.sh@14 -- # '[' rdma == rdma ']' 00:28:33.976 16:19:04 -- host/discovery_remove_ifc.sh@15 -- # echo 'Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target.' 00:28:33.976 Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target. 00:28:33.976 16:19:04 -- host/discovery_remove_ifc.sh@16 -- # exit 0 00:28:33.976 00:28:33.976 real 0m0.211s 00:28:33.976 user 0m0.119s 00:28:33.976 sys 0m0.109s 00:28:33.976 16:19:04 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:28:33.976 16:19:04 -- common/autotest_common.sh@10 -- # set +x 00:28:33.976 ************************************ 00:28:33.976 END TEST nvmf_discovery_remove_ifc 00:28:33.976 ************************************ 00:28:33.976 16:19:04 -- nvmf/nvmf.sh@106 -- # [[ rdma == \t\c\p ]] 00:28:33.976 16:19:04 -- nvmf/nvmf.sh@110 -- # [[ 0 -eq 1 ]] 00:28:33.976 16:19:04 -- nvmf/nvmf.sh@115 -- # [[ 0 -eq 1 ]] 00:28:33.976 16:19:04 -- nvmf/nvmf.sh@120 -- # [[ phy == phy ]] 00:28:33.976 16:19:04 -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:28:33.976 16:19:04 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:33.976 16:19:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:33.976 16:19:04 -- common/autotest_common.sh@10 -- # set +x 00:28:33.976 ************************************ 00:28:33.976 START TEST nvmf_bdevperf 00:28:33.976 ************************************ 00:28:33.976 16:19:04 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=rdma 00:28:34.236 * Looking for test storage... 00:28:34.236 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:28:34.236 16:19:04 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:28:34.236 16:19:04 -- common/autotest_common.sh@1690 -- # lcov --version 00:28:34.236 16:19:04 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:28:34.236 16:19:04 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:28:34.236 16:19:04 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:28:34.236 16:19:04 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:28:34.236 16:19:04 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:28:34.236 16:19:04 -- scripts/common.sh@335 -- # IFS=.-: 00:28:34.236 16:19:04 -- scripts/common.sh@335 -- # read -ra ver1 00:28:34.236 16:19:04 -- scripts/common.sh@336 -- # IFS=.-: 00:28:34.236 16:19:04 -- scripts/common.sh@336 -- # read -ra ver2 00:28:34.236 16:19:04 -- scripts/common.sh@337 -- # local 'op=<' 00:28:34.236 16:19:04 -- scripts/common.sh@339 -- # ver1_l=2 00:28:34.236 16:19:04 -- scripts/common.sh@340 -- # ver2_l=1 00:28:34.236 16:19:04 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:28:34.236 16:19:04 -- scripts/common.sh@343 -- # case "$op" in 00:28:34.236 16:19:04 -- scripts/common.sh@344 -- # : 1 00:28:34.236 16:19:04 -- scripts/common.sh@363 -- # (( v = 0 )) 00:28:34.236 16:19:04 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:34.236 16:19:04 -- scripts/common.sh@364 -- # decimal 1 00:28:34.236 16:19:04 -- scripts/common.sh@352 -- # local d=1 00:28:34.236 16:19:04 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:34.236 16:19:04 -- scripts/common.sh@354 -- # echo 1 00:28:34.236 16:19:04 -- scripts/common.sh@364 -- # ver1[v]=1 00:28:34.236 16:19:04 -- scripts/common.sh@365 -- # decimal 2 00:28:34.236 16:19:04 -- scripts/common.sh@352 -- # local d=2 00:28:34.236 16:19:04 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:34.236 16:19:04 -- scripts/common.sh@354 -- # echo 2 00:28:34.236 16:19:04 -- scripts/common.sh@365 -- # ver2[v]=2 00:28:34.236 16:19:04 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:28:34.236 16:19:04 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:28:34.236 16:19:04 -- scripts/common.sh@367 -- # return 0 00:28:34.236 16:19:04 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:34.236 16:19:04 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:28:34.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.236 --rc genhtml_branch_coverage=1 00:28:34.236 --rc genhtml_function_coverage=1 00:28:34.236 --rc genhtml_legend=1 00:28:34.236 --rc geninfo_all_blocks=1 00:28:34.236 --rc geninfo_unexecuted_blocks=1 00:28:34.236 00:28:34.236 ' 00:28:34.236 16:19:04 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:28:34.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.236 --rc genhtml_branch_coverage=1 00:28:34.236 --rc genhtml_function_coverage=1 00:28:34.236 --rc genhtml_legend=1 00:28:34.236 --rc geninfo_all_blocks=1 00:28:34.236 --rc geninfo_unexecuted_blocks=1 00:28:34.236 00:28:34.236 ' 00:28:34.236 16:19:04 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:28:34.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.236 --rc genhtml_branch_coverage=1 00:28:34.236 --rc genhtml_function_coverage=1 00:28:34.236 --rc genhtml_legend=1 00:28:34.236 --rc geninfo_all_blocks=1 00:28:34.236 --rc geninfo_unexecuted_blocks=1 00:28:34.236 00:28:34.236 ' 00:28:34.236 16:19:04 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:28:34.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:34.236 --rc genhtml_branch_coverage=1 00:28:34.236 --rc genhtml_function_coverage=1 00:28:34.236 --rc genhtml_legend=1 00:28:34.236 --rc geninfo_all_blocks=1 00:28:34.236 --rc geninfo_unexecuted_blocks=1 00:28:34.236 00:28:34.236 ' 00:28:34.236 16:19:04 -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:28:34.236 16:19:04 -- nvmf/common.sh@7 -- # uname -s 00:28:34.236 16:19:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:34.236 16:19:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:34.236 16:19:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:34.236 16:19:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:34.236 16:19:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:34.236 16:19:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:34.236 16:19:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:34.236 16:19:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:34.236 16:19:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:34.236 16:19:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:34.236 16:19:04 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:28:34.236 16:19:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:28:34.236 16:19:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:34.236 16:19:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:34.236 16:19:04 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:34.236 16:19:04 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:28:34.236 16:19:04 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:34.236 16:19:04 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:34.236 16:19:04 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:34.236 16:19:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.236 16:19:04 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.236 16:19:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.236 16:19:04 -- paths/export.sh@5 -- # export PATH 00:28:34.236 16:19:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:34.236 16:19:04 -- nvmf/common.sh@46 -- # : 0 00:28:34.236 16:19:04 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:28:34.236 16:19:04 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:28:34.236 16:19:04 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:28:34.236 16:19:04 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:34.236 16:19:04 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:34.236 16:19:04 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:28:34.236 16:19:04 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:28:34.236 16:19:04 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:28:34.236 16:19:04 -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:34.236 16:19:04 -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:34.236 16:19:04 -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:34.236 16:19:04 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:28:34.236 16:19:04 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:34.236 16:19:04 -- nvmf/common.sh@436 -- # prepare_net_devs 00:28:34.236 16:19:04 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:28:34.236 16:19:04 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:28:34.236 16:19:04 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:34.236 16:19:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:34.236 16:19:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:34.236 16:19:04 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:28:34.236 16:19:04 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:28:34.236 16:19:04 -- nvmf/common.sh@284 -- # xtrace_disable 00:28:34.236 16:19:04 -- common/autotest_common.sh@10 -- # set +x 00:28:40.795 16:19:11 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:28:40.795 16:19:11 -- nvmf/common.sh@290 -- # pci_devs=() 00:28:40.795 16:19:11 -- nvmf/common.sh@290 -- # local -a pci_devs 00:28:40.795 16:19:11 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:28:40.795 16:19:11 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:28:40.795 16:19:11 -- nvmf/common.sh@292 -- # pci_drivers=() 00:28:40.795 16:19:11 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:28:40.795 16:19:11 -- nvmf/common.sh@294 -- # net_devs=() 00:28:40.795 16:19:11 -- nvmf/common.sh@294 -- # local -ga net_devs 00:28:40.795 16:19:11 -- nvmf/common.sh@295 -- # e810=() 00:28:40.795 16:19:11 -- nvmf/common.sh@295 -- # local -ga e810 00:28:40.795 16:19:11 -- nvmf/common.sh@296 -- # x722=() 00:28:40.795 16:19:11 -- nvmf/common.sh@296 -- # local -ga x722 00:28:40.795 16:19:11 -- nvmf/common.sh@297 -- # mlx=() 00:28:40.795 16:19:11 -- nvmf/common.sh@297 -- # local -ga mlx 00:28:40.795 16:19:11 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:40.795 16:19:11 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:40.795 16:19:11 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:40.795 16:19:11 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:40.795 16:19:11 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:40.795 16:19:11 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:40.795 16:19:11 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:40.795 16:19:11 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:40.795 16:19:11 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:40.795 16:19:11 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:40.795 16:19:11 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:40.795 16:19:11 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:28:40.795 16:19:11 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:28:40.795 16:19:11 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:28:40.795 
16:19:11 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:28:40.795 16:19:11 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:28:40.795 16:19:11 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:28:40.795 16:19:11 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:28:40.795 16:19:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:40.795 16:19:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:28:40.795 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:28:40.795 16:19:11 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:40.795 16:19:11 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:40.795 16:19:11 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:40.795 16:19:11 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:40.795 16:19:11 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:40.795 16:19:11 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:40.795 16:19:11 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:28:40.795 16:19:11 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:28:40.795 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:28:40.795 16:19:11 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:28:40.795 16:19:11 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:28:40.795 16:19:11 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:28:40.795 16:19:11 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:28:40.795 16:19:11 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:28:40.795 16:19:11 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:28:40.795 16:19:11 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:28:40.795 16:19:11 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:28:40.795 16:19:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:40.795 16:19:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.796 16:19:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:40.796 16:19:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.796 16:19:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:28:40.796 Found net devices under 0000:d9:00.0: mlx_0_0 00:28:40.796 16:19:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.796 16:19:11 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:28:40.796 16:19:11 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:40.796 16:19:11 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:28:40.796 16:19:11 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:40.796 16:19:11 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:28:40.796 Found net devices under 0000:d9:00.1: mlx_0_1 00:28:40.796 16:19:11 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:28:40.796 16:19:11 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:28:40.796 16:19:11 -- nvmf/common.sh@402 -- # is_hw=yes 00:28:40.796 16:19:11 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:28:40.796 16:19:11 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:28:40.796 16:19:11 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:28:40.796 16:19:11 -- nvmf/common.sh@408 -- # rdma_device_init 00:28:40.796 16:19:11 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:28:40.796 16:19:11 -- nvmf/common.sh@57 -- # uname 00:28:40.796 16:19:11 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:28:40.796 16:19:11 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:28:40.796 
16:19:11 -- nvmf/common.sh@62 -- # modprobe ib_core 00:28:40.796 16:19:11 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:28:40.796 16:19:11 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:28:40.796 16:19:11 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:28:40.796 16:19:11 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:28:40.796 16:19:11 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:28:40.796 16:19:11 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:28:40.796 16:19:11 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:28:40.796 16:19:11 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:28:40.796 16:19:11 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:40.796 16:19:11 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:40.796 16:19:11 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:40.796 16:19:11 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:40.796 16:19:11 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:40.796 16:19:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:40.796 16:19:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.796 16:19:11 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:40.796 16:19:11 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:40.796 16:19:11 -- nvmf/common.sh@104 -- # continue 2 00:28:40.796 16:19:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:40.796 16:19:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.796 16:19:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:40.796 16:19:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.796 16:19:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:40.796 16:19:11 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:40.796 16:19:11 -- nvmf/common.sh@104 -- # continue 2 00:28:40.796 16:19:11 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:40.796 16:19:11 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:28:40.796 16:19:11 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:40.796 16:19:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:40.796 16:19:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:40.796 16:19:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:40.796 16:19:11 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:28:40.796 16:19:11 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:28:40.796 16:19:11 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:28:40.796 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:28:40.796 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:28:40.796 altname enp217s0f0np0 00:28:40.796 altname ens818f0np0 00:28:40.796 inet 192.168.100.8/24 scope global mlx_0_0 00:28:40.796 valid_lft forever preferred_lft forever 00:28:40.796 16:19:11 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:28:40.796 16:19:11 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:28:40.796 16:19:11 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:40.796 16:19:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:40.796 16:19:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:40.796 16:19:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:40.796 16:19:11 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:28:40.796 16:19:11 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:28:40.796 16:19:11 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:28:40.796 7: mlx_0_1: mtu 1500 qdisc mq state DOWN 
group default qlen 1000 00:28:40.796 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:28:40.796 altname enp217s0f1np1 00:28:40.796 altname ens818f1np1 00:28:40.796 inet 192.168.100.9/24 scope global mlx_0_1 00:28:40.796 valid_lft forever preferred_lft forever 00:28:40.796 16:19:11 -- nvmf/common.sh@410 -- # return 0 00:28:40.796 16:19:11 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:28:40.796 16:19:11 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:28:40.796 16:19:11 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:28:40.796 16:19:11 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:28:40.796 16:19:11 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:28:40.796 16:19:11 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:28:40.796 16:19:11 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:28:40.796 16:19:11 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:28:40.796 16:19:11 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:28:40.796 16:19:11 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:28:40.796 16:19:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:40.796 16:19:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.796 16:19:11 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:28:40.796 16:19:11 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:28:40.796 16:19:11 -- nvmf/common.sh@104 -- # continue 2 00:28:40.796 16:19:11 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:28:40.796 16:19:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.796 16:19:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:28:40.796 16:19:11 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:28:40.796 16:19:11 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:28:40.796 16:19:11 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:28:40.796 16:19:11 -- nvmf/common.sh@104 -- # continue 2 00:28:40.796 16:19:11 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:40.796 16:19:11 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:28:40.796 16:19:11 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:28:40.796 16:19:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:28:40.796 16:19:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:40.796 16:19:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:40.796 16:19:11 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:28:40.796 16:19:11 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:28:40.796 16:19:11 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:28:40.796 16:19:11 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:28:40.796 16:19:11 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:28:40.796 16:19:11 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:28:40.796 16:19:11 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:28:40.796 192.168.100.9' 00:28:40.796 16:19:11 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:28:40.796 192.168.100.9' 00:28:40.796 16:19:11 -- nvmf/common.sh@445 -- # head -n 1 00:28:40.796 16:19:11 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:28:40.796 16:19:11 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:28:40.796 192.168.100.9' 00:28:40.796 16:19:11 -- nvmf/common.sh@446 -- # tail -n +2 00:28:40.796 16:19:11 -- nvmf/common.sh@446 -- # head -n 1 00:28:40.796 16:19:11 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:28:40.796 16:19:11 -- nvmf/common.sh@447 -- # '[' 
-z 192.168.100.8 ']' 00:28:40.796 16:19:11 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:28:40.796 16:19:11 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:28:40.796 16:19:11 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:28:40.796 16:19:11 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:28:40.796 16:19:11 -- host/bdevperf.sh@25 -- # tgt_init 00:28:40.796 16:19:11 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:40.796 16:19:11 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:40.796 16:19:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:40.796 16:19:11 -- common/autotest_common.sh@10 -- # set +x 00:28:40.796 16:19:11 -- nvmf/common.sh@469 -- # nvmfpid=1498627 00:28:40.796 16:19:11 -- nvmf/common.sh@470 -- # waitforlisten 1498627 00:28:40.796 16:19:11 -- common/autotest_common.sh@829 -- # '[' -z 1498627 ']' 00:28:40.796 16:19:11 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:40.796 16:19:11 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:40.796 16:19:11 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:40.796 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:40.796 16:19:11 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:40.796 16:19:11 -- common/autotest_common.sh@10 -- # set +x 00:28:40.796 16:19:11 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:40.796 [2024-11-20 16:19:11.306473] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:40.796 [2024-11-20 16:19:11.306527] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:40.796 EAL: No free 2048 kB hugepages reported on node 1 00:28:40.796 [2024-11-20 16:19:11.376742] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:40.796 [2024-11-20 16:19:11.413489] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:40.796 [2024-11-20 16:19:11.413621] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:40.796 [2024-11-20 16:19:11.413631] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:40.796 [2024-11-20 16:19:11.413639] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
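For readers following the trace above: the get_ip_address helper in nvmf/common.sh resolves each RDMA interface to its IPv4 address with a plain ip/awk/cut pipeline, exactly as shown in the xtrace. A minimal standalone sketch of that lookup (get_rdma_ip is a hypothetical wrapper name used only for illustration; the interface names and addresses are the ones discovered on this host):

    # Sketch of the IPv4 lookup traced above (get_ip_address in nvmf/common.sh).
    # get_rdma_ip is a hypothetical helper name, not an SPDK function.
    get_rdma_ip() {
        local interface=$1
        # "ip -o -4" prints one line per address; field 4 looks like 192.168.100.8/24,
        # and cut strips the /24 prefix length.
        ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
    }

    get_rdma_ip mlx_0_0   # 192.168.100.8 on this host
    get_rdma_ip mlx_0_1   # 192.168.100.9

The two addresses become NVMF_FIRST_TARGET_IP and NVMF_SECOND_TARGET_IP, and because the transport is RDMA the script also switches NVME_CONNECT to 'nvme connect -i 15' and loads the nvme-rdma module before starting the target.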
00:28:40.797 [2024-11-20 16:19:11.413743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:40.797 [2024-11-20 16:19:11.413826] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:40.797 [2024-11-20 16:19:11.413828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:41.361 16:19:12 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:41.361 16:19:12 -- common/autotest_common.sh@862 -- # return 0 00:28:41.361 16:19:12 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:41.361 16:19:12 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:41.361 16:19:12 -- common/autotest_common.sh@10 -- # set +x 00:28:41.619 16:19:12 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:41.619 16:19:12 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:41.619 16:19:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.619 16:19:12 -- common/autotest_common.sh@10 -- # set +x 00:28:41.619 [2024-11-20 16:19:12.201797] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x2179900/0x217ddb0) succeed. 00:28:41.619 [2024-11-20 16:19:12.211070] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x217ae00/0x21bf450) succeed. 00:28:41.619 16:19:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.619 16:19:12 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:41.619 16:19:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.619 16:19:12 -- common/autotest_common.sh@10 -- # set +x 00:28:41.619 Malloc0 00:28:41.619 16:19:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.619 16:19:12 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:41.619 16:19:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.619 16:19:12 -- common/autotest_common.sh@10 -- # set +x 00:28:41.619 16:19:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.619 16:19:12 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:41.619 16:19:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.619 16:19:12 -- common/autotest_common.sh@10 -- # set +x 00:28:41.619 16:19:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.619 16:19:12 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:41.619 16:19:12 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:41.619 16:19:12 -- common/autotest_common.sh@10 -- # set +x 00:28:41.619 [2024-11-20 16:19:12.356278] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:41.619 16:19:12 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:41.619 16:19:12 -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:41.619 16:19:12 -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:41.619 16:19:12 -- nvmf/common.sh@520 -- # config=() 00:28:41.619 16:19:12 -- nvmf/common.sh@520 -- # local subsystem config 00:28:41.619 16:19:12 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:41.619 16:19:12 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:41.619 { 00:28:41.619 "params": { 00:28:41.619 "name": "Nvme$subsystem", 00:28:41.619 "trtype": 
"$TEST_TRANSPORT", 00:28:41.619 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:41.619 "adrfam": "ipv4", 00:28:41.619 "trsvcid": "$NVMF_PORT", 00:28:41.619 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:41.619 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:41.619 "hdgst": ${hdgst:-false}, 00:28:41.619 "ddgst": ${ddgst:-false} 00:28:41.619 }, 00:28:41.619 "method": "bdev_nvme_attach_controller" 00:28:41.619 } 00:28:41.619 EOF 00:28:41.619 )") 00:28:41.619 16:19:12 -- nvmf/common.sh@542 -- # cat 00:28:41.619 16:19:12 -- nvmf/common.sh@544 -- # jq . 00:28:41.619 16:19:12 -- nvmf/common.sh@545 -- # IFS=, 00:28:41.619 16:19:12 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:41.619 "params": { 00:28:41.619 "name": "Nvme1", 00:28:41.619 "trtype": "rdma", 00:28:41.619 "traddr": "192.168.100.8", 00:28:41.619 "adrfam": "ipv4", 00:28:41.619 "trsvcid": "4420", 00:28:41.619 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:41.619 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:41.619 "hdgst": false, 00:28:41.619 "ddgst": false 00:28:41.619 }, 00:28:41.619 "method": "bdev_nvme_attach_controller" 00:28:41.619 }' 00:28:41.619 [2024-11-20 16:19:12.406931] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:41.619 [2024-11-20 16:19:12.406980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498911 ] 00:28:41.877 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.877 [2024-11-20 16:19:12.478654] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.877 [2024-11-20 16:19:12.515511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:42.135 Running I/O for 1 seconds... 
00:28:43.068 00:28:43.068 Latency(us) 00:28:43.068 [2024-11-20T15:19:13.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:43.068 [2024-11-20T15:19:13.873Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:43.068 Verification LBA range: start 0x0 length 0x4000 00:28:43.068 Nvme1n1 : 1.00 25678.92 100.31 0.00 0.00 4960.98 1173.09 11848.91 00:28:43.068 [2024-11-20T15:19:13.873Z] =================================================================================================================== 00:28:43.068 [2024-11-20T15:19:13.873Z] Total : 25678.92 100.31 0.00 0.00 4960.98 1173.09 11848.91 00:28:43.325 16:19:13 -- host/bdevperf.sh@30 -- # bdevperfpid=1499189 00:28:43.325 16:19:13 -- host/bdevperf.sh@32 -- # sleep 3 00:28:43.325 16:19:13 -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:43.325 16:19:13 -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:43.325 16:19:13 -- nvmf/common.sh@520 -- # config=() 00:28:43.325 16:19:13 -- nvmf/common.sh@520 -- # local subsystem config 00:28:43.325 16:19:13 -- nvmf/common.sh@522 -- # for subsystem in "${@:-1}" 00:28:43.325 16:19:13 -- nvmf/common.sh@542 -- # config+=("$(cat <<-EOF 00:28:43.325 { 00:28:43.325 "params": { 00:28:43.325 "name": "Nvme$subsystem", 00:28:43.325 "trtype": "$TEST_TRANSPORT", 00:28:43.325 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.325 "adrfam": "ipv4", 00:28:43.325 "trsvcid": "$NVMF_PORT", 00:28:43.325 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.325 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.325 "hdgst": ${hdgst:-false}, 00:28:43.325 "ddgst": ${ddgst:-false} 00:28:43.325 }, 00:28:43.325 "method": "bdev_nvme_attach_controller" 00:28:43.325 } 00:28:43.325 EOF 00:28:43.325 )") 00:28:43.325 16:19:13 -- nvmf/common.sh@542 -- # cat 00:28:43.325 16:19:13 -- nvmf/common.sh@544 -- # jq . 00:28:43.325 16:19:13 -- nvmf/common.sh@545 -- # IFS=, 00:28:43.325 16:19:13 -- nvmf/common.sh@546 -- # printf '%s\n' '{ 00:28:43.325 "params": { 00:28:43.325 "name": "Nvme1", 00:28:43.325 "trtype": "rdma", 00:28:43.325 "traddr": "192.168.100.8", 00:28:43.325 "adrfam": "ipv4", 00:28:43.325 "trsvcid": "4420", 00:28:43.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:43.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:43.325 "hdgst": false, 00:28:43.325 "ddgst": false 00:28:43.325 }, 00:28:43.325 "method": "bdev_nvme_attach_controller" 00:28:43.325 }' 00:28:43.325 [2024-11-20 16:19:13.932357] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:43.326 [2024-11-20 16:19:13.932412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1499189 ] 00:28:43.326 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.326 [2024-11-20 16:19:14.003717] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:43.326 [2024-11-20 16:19:14.040001] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:43.584 Running I/O for 15 seconds... 
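Two things are worth noting before the long stretch of completion notices that follows. First, the 100.31 MiB/s reported for the 1-second verify run is simply the IOPS figure scaled by the 4096-byte I/O size passed via -o; a quick check in shell (numbers copied from the result table above):

    # 25678.92 IOPS x 4096 B per I/O, expressed in MiB/s, reproduces the reported 100.31.
    awk 'BEGIN { printf "%.2f MiB/s\n", 25678.92 * 4096 / (1024 * 1024) }'

Second, the 15-second bdevperf run started here is deliberately disturbed: host/bdevperf.sh kills the nvmf target with kill -9 a few seconds in (next trace line), and the burst of ABORTED - SQ DELETION completion notices below appears to be the host-side qpair completing its outstanding verify I/O as aborted once the target process is gone.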
00:28:46.110 16:19:16 -- host/bdevperf.sh@33 -- # kill -9 1498627 00:28:46.110 16:19:16 -- host/bdevperf.sh@35 -- # sleep 3 00:28:47.486 [2024-11-20 16:19:17.922633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:26976 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007534000 len:0x1000 key:0x181400 00:28:47.486 [2024-11-20 16:19:17.922668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.922688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:27656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.486 [2024-11-20 16:19:17.922697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.922709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:26984 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007574000 len:0x1000 key:0x181400 00:28:47.486 [2024-11-20 16:19:17.922719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.922729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:27664 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d1600 len:0x1000 key:0x184000 00:28:47.486 [2024-11-20 16:19:17.922738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.922749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:27000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007556000 len:0x1000 key:0x181400 00:28:47.486 [2024-11-20 16:19:17.922758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.922768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:27008 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007550000 len:0x1000 key:0x181400 00:28:47.486 [2024-11-20 16:19:17.922777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.922788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:27016 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754e000 len:0x1000 key:0x181400 00:28:47.486 [2024-11-20 16:19:17.922796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.922807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:27672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.486 [2024-11-20 16:19:17.922816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.922826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:27680 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cc380 len:0x1000 key:0x184000 00:28:47.486 [2024-11-20 16:19:17.922835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 
cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.922846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:27688 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138cb300 len:0x1000 key:0x184000 00:28:47.486 [2024-11-20 16:19:17.922855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.922865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:27696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.486 [2024-11-20 16:19:17.922879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.922890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:27024 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751c000 len:0x1000 key:0x181400 00:28:47.486 [2024-11-20 16:19:17.922899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.922910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:27704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.486 [2024-11-20 16:19:17.922920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.922930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:27712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.486 [2024-11-20 16:19:17.922939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.922949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:27032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007544000 len:0x1000 key:0x181400 00:28:47.486 [2024-11-20 16:19:17.922958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.922968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:27720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.486 [2024-11-20 16:19:17.922977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.922987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:27728 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c3f80 len:0x1000 key:0x184000 00:28:47.486 [2024-11-20 16:19:17.922998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.923008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:27736 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c2f00 len:0x1000 key:0x184000 00:28:47.486 [2024-11-20 16:19:17.923017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.923028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 
nsid:1 lba:27744 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138c1e80 len:0x1000 key:0x184000 00:28:47.486 [2024-11-20 16:19:17.923037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.923047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:27048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756e000 len:0x1000 key:0x181400 00:28:47.486 [2024-11-20 16:19:17.923056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.923066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:27752 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bfd80 len:0x1000 key:0x184000 00:28:47.486 [2024-11-20 16:19:17.923077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.923087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:27760 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bed00 len:0x1000 key:0x184000 00:28:47.486 [2024-11-20 16:19:17.923097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.923110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:27768 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bdc80 len:0x1000 key:0x184000 00:28:47.486 [2024-11-20 16:19:17.923119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.923130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:27776 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bcc00 len:0x1000 key:0x184000 00:28:47.486 [2024-11-20 16:19:17.923139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.923149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:27088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007512000 len:0x1000 key:0x181400 00:28:47.486 [2024-11-20 16:19:17.923159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.923169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:27784 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138bab00 len:0x1000 key:0x184000 00:28:47.486 [2024-11-20 16:19:17.923178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.923188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:27792 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b9a80 len:0x1000 key:0x184000 00:28:47.486 [2024-11-20 16:19:17.923197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.923208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:27800 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x2000138b8a00 len:0x1000 key:0x184000 00:28:47.486 [2024-11-20 16:19:17.923217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.923227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:27808 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b7980 len:0x1000 key:0x184000 00:28:47.486 [2024-11-20 16:19:17.923236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.923246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:27096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007524000 len:0x1000 key:0x181400 00:28:47.486 [2024-11-20 16:19:17.923255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.923265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:27816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.486 [2024-11-20 16:19:17.923274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.923284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:27104 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007576000 len:0x1000 key:0x181400 00:28:47.486 [2024-11-20 16:19:17.923293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.923303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:27824 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b3780 len:0x1000 key:0x184000 00:28:47.486 [2024-11-20 16:19:17.923312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.486 [2024-11-20 16:19:17.923322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:27832 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b2700 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.923332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.487 [2024-11-20 16:19:17.923351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:27848 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138b0600 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.923369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:27856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.487 [2024-11-20 16:19:17.923388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27864 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ae500 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.923407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:27872 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ad480 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.923426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:27880 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ac400 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.923446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:27888 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ab380 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.923466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:27896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.487 [2024-11-20 16:19:17.923486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:27144 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756a000 len:0x1000 key:0x181400 00:28:47.487 [2024-11-20 16:19:17.923508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:27152 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007522000 len:0x1000 key:0x181400 00:28:47.487 [2024-11-20 16:19:17.923532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:27160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000751e000 len:0x1000 key:0x181400 00:28:47.487 [2024-11-20 16:19:17.923553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:27904 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a6100 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.923577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:27176 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007570000 len:0x1000 key:0x181400 00:28:47.487 [2024-11-20 16:19:17.923599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:27184 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007502000 len:0x1000 key:0x181400 00:28:47.487 [2024-11-20 16:19:17.923620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:27912 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a2f80 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.923641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:27920 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138a1f00 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.923661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:27200 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750a000 len:0x1000 key:0x181400 00:28:47.487 [2024-11-20 16:19:17.923680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:27928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.487 [2024-11-20 16:19:17.923701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:27936 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ed80 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.923720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:27944 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389dd00 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.923740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:27952 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389cc80 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.923759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 
00:28:47.487 [2024-11-20 16:19:17.923770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:27960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.487 [2024-11-20 16:19:17.923778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:27968 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001389ab80 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.923802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:27976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.487 [2024-11-20 16:19:17.923822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:27984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.487 [2024-11-20 16:19:17.923840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:27992 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013897a00 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.923860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:28000 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013896980 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.923879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:28008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.487 [2024-11-20 16:19:17.923898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:27248 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007510000 len:0x1000 key:0x181400 00:28:47.487 [2024-11-20 16:19:17.923918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:27256 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000753e000 len:0x1000 key:0x181400 00:28:47.487 [2024-11-20 16:19:17.923937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:28016 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:28:47.487 [2024-11-20 16:19:17.923955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:28024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.487 [2024-11-20 16:19:17.923974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.923984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:28032 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013890680 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.923993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.924003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:28040 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388f600 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.924012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.924023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:27288 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755a000 len:0x1000 key:0x181400 00:28:47.487 [2024-11-20 16:19:17.924032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.924042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28048 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388d500 len:0x1000 key:0x184000 00:28:47.487 [2024-11-20 16:19:17.924051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.487 [2024-11-20 16:19:17.924062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:27304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007542000 len:0x1000 key:0x181400 00:28:47.488 [2024-11-20 16:19:17.924070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:27312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750e000 len:0x1000 key:0x181400 00:28:47.488 [2024-11-20 16:19:17.924089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:28056 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001388a380 len:0x1000 key:0x184000 00:28:47.488 [2024-11-20 16:19:17.924108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:28064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.488 [2024-11-20 16:19:17.924127] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:28072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.488 [2024-11-20 16:19:17.924145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:27344 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757c000 len:0x1000 key:0x181400 00:28:47.488 [2024-11-20 16:19:17.924164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:28080 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013886180 len:0x1000 key:0x184000 00:28:47.488 [2024-11-20 16:19:17.924183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:28088 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013885100 len:0x1000 key:0x184000 00:28:47.488 [2024-11-20 16:19:17.924202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:27360 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755c000 len:0x1000 key:0x181400 00:28:47.488 [2024-11-20 16:19:17.924222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:27368 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007572000 len:0x1000 key:0x181400 00:28:47.488 [2024-11-20 16:19:17.924244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:27376 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000756c000 len:0x1000 key:0x181400 00:28:47.488 [2024-11-20 16:19:17.924264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:28096 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200013880f00 len:0x1000 key:0x184000 00:28:47.488 [2024-11-20 16:19:17.924284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:28104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.488 [2024-11-20 16:19:17.924304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 
cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:28112 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ee00 len:0x1000 key:0x184000 00:28:47.488 [2024-11-20 16:19:17.924323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:27392 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000757a000 len:0x1000 key:0x181400 00:28:47.488 [2024-11-20 16:19:17.924343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:28120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.488 [2024-11-20 16:19:17.924363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:27408 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000755e000 len:0x1000 key:0x181400 00:28:47.488 [2024-11-20 16:19:17.924383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28128 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20001387ac00 len:0x1000 key:0x184000 00:28:47.488 [2024-11-20 16:19:17.924402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:28136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.488 [2024-11-20 16:19:17.924422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:27424 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007516000 len:0x1000 key:0x181400 00:28:47.488 [2024-11-20 16:19:17.924442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:27432 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752a000 len:0x1000 key:0x181400 00:28:47.488 [2024-11-20 16:19:17.924463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:27440 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007548000 len:0x1000 key:0x181400 00:28:47.488 [2024-11-20 16:19:17.924484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924494] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:28144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.488 [2024-11-20 16:19:17.924503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:27456 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000754c000 len:0x1000 key:0x181400 00:28:47.488 [2024-11-20 16:19:17.924527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27464 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000752c000 len:0x1000 key:0x181400 00:28:47.488 [2024-11-20 16:19:17.924550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:28152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.488 [2024-11-20 16:19:17.924571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:28160 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138f5780 len:0x1000 key:0x184000 00:28:47.488 [2024-11-20 16:19:17.924590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:28168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.488 [2024-11-20 16:19:17.924610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:28176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.488 [2024-11-20 16:19:17.924630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:28184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.488 [2024-11-20 16:19:17.924649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:28192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.488 [2024-11-20 16:19:17.924670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:27496 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007530000 len:0x1000 key:0x181400 00:28:47.488 [2024-11-20 16:19:17.924689] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:28200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.488 [2024-11-20 16:19:17.924710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:27504 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007562000 len:0x1000 key:0x181400 00:28:47.488 [2024-11-20 16:19:17.924731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:27512 len:8 SGL KEYED DATA BLOCK ADDRESS 0x20000750c000 len:0x1000 key:0x181400 00:28:47.488 [2024-11-20 16:19:17.924749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:27520 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007514000 len:0x1000 key:0x181400 00:28:47.488 [2024-11-20 16:19:17.924768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.488 [2024-11-20 16:19:17.924780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.489 [2024-11-20 16:19:17.924789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.924800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:27536 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007568000 len:0x1000 key:0x181400 00:28:47.489 [2024-11-20 16:19:17.924809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.924819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:28216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.489 [2024-11-20 16:19:17.924829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.924839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:28224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.489 [2024-11-20 16:19:17.924848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.924859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:28232 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e7080 len:0x1000 key:0x184000 00:28:47.489 [2024-11-20 16:19:17.924870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 
16:19:17.924882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:27568 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007520000 len:0x1000 key:0x181400 00:28:47.489 [2024-11-20 16:19:17.924891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.924901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:28240 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e4f80 len:0x1000 key:0x184000 00:28:47.489 [2024-11-20 16:19:17.924909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.924920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:27584 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007554000 len:0x1000 key:0x181400 00:28:47.489 [2024-11-20 16:19:17.924929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.924941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:28248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.489 [2024-11-20 16:19:17.924950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.924960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:28256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.489 [2024-11-20 16:19:17.924969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.924980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:28264 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138e0d80 len:0x1000 key:0x184000 00:28:47.489 [2024-11-20 16:19:17.924989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.925000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:27592 len:8 SGL KEYED DATA BLOCK ADDRESS 0x200007566000 len:0x1000 key:0x181400 00:28:47.489 [2024-11-20 16:19:17.925009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.925019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.489 [2024-11-20 16:19:17.925028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.925038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:28280 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138ddc00 len:0x1000 key:0x184000 00:28:47.489 [2024-11-20 16:19:17.925047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.925057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28288 len:8 SGL KEYED 
DATA BLOCK ADDRESS 0x2000138dcb80 len:0x1000 key:0x184000 00:28:47.489 [2024-11-20 16:19:17.925066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.925076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:28296 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138dbb00 len:0x1000 key:0x184000 00:28:47.489 [2024-11-20 16:19:17.925084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.925095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:28304 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138daa80 len:0x1000 key:0x184000 00:28:47.489 [2024-11-20 16:19:17.925104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.925114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:28312 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d9a00 len:0x1000 key:0x184000 00:28:47.489 [2024-11-20 16:19:17.925123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.925132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.489 [2024-11-20 16:19:17.925141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.925153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:28328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:47.489 [2024-11-20 16:19:17.925164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.925174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:28336 len:8 SGL KEYED DATA BLOCK ADDRESS 0x2000138d6880 len:0x1000 key:0x184000 00:28:47.489 [2024-11-20 16:19:17.925183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:54124 cdw0:5d783000 sqhd:9b52 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.937243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:47.489 [2024-11-20 16:19:17.937265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:47.489 [2024-11-20 16:19:17.937277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:27648 len:8 PRP1 0x0 PRP2 0x0 00:28:47.489 [2024-11-20 16:19:17.937290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.937335] bdev_nvme.c:1590:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x2000192e4a40 was disconnected and freed. reset controller. 
00:28:47.489 [2024-11-20 16:19:17.937370] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.489 [2024-11-20 16:19:17.937384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54124 cdw0:0 sqhd:9132 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.937398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.489 [2024-11-20 16:19:17.937409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54124 cdw0:0 sqhd:9132 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.937422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.489 [2024-11-20 16:19:17.937434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54124 cdw0:0 sqhd:9132 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.937446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:28:47.489 [2024-11-20 16:19:17.937458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:54124 cdw0:0 sqhd:9132 p:0 m:0 dnr:0 00:28:47.489 [2024-11-20 16:19:17.954229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:47.489 [2024-11-20 16:19:17.954248] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:47.489 [2024-11-20 16:19:17.954259] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:47.489 [2024-11-20 16:19:17.956010] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:47.489 [2024-11-20 16:19:17.958140] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:47.489 [2024-11-20 16:19:17.958160] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:47.489 [2024-11-20 16:19:17.958168] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:48.422 [2024-11-20 16:19:18.962021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:48.423 [2024-11-20 16:19:18.962082] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:48.423 [2024-11-20 16:19:18.962467] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:48.423 [2024-11-20 16:19:18.962479] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:48.423 [2024-11-20 16:19:18.962493] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:48.423 [2024-11-20 16:19:18.963895] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:48.423 [2024-11-20 16:19:18.964068] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:48.423 [2024-11-20 16:19:18.975578] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:48.423 [2024-11-20 16:19:18.977670] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:48.423 [2024-11-20 16:19:18.977691] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:48.423 [2024-11-20 16:19:18.977699] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:49.356 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1498627 Killed "${NVMF_APP[@]}" "$@" 00:28:49.356 16:19:19 -- host/bdevperf.sh@36 -- # tgt_init 00:28:49.356 16:19:19 -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:49.356 16:19:19 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:28:49.356 16:19:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:49.356 16:19:19 -- common/autotest_common.sh@10 -- # set +x 00:28:49.356 16:19:19 -- nvmf/common.sh@469 -- # nvmfpid=1500233 00:28:49.356 16:19:19 -- nvmf/common.sh@470 -- # waitforlisten 1500233 00:28:49.356 16:19:19 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:49.356 16:19:19 -- common/autotest_common.sh@829 -- # '[' -z 1500233 ']' 00:28:49.356 16:19:19 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:49.356 16:19:19 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:49.356 16:19:19 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:49.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:49.356 16:19:19 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:49.356 16:19:19 -- common/autotest_common.sh@10 -- # set +x 00:28:49.356 [2024-11-20 16:19:19.954385] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:28:49.356 [2024-11-20 16:19:19.954439] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:49.356 [2024-11-20 16:19:19.981610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:49.356 [2024-11-20 16:19:19.981636] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:49.356 [2024-11-20 16:19:19.981752] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:49.356 [2024-11-20 16:19:19.981763] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:49.356 [2024-11-20 16:19:19.981773] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:49.356 [2024-11-20 16:19:19.982896] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:49.356 [2024-11-20 16:19:19.983466] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:49.356 EAL: No free 2048 kB hugepages reported on node 1 00:28:49.356 [2024-11-20 16:19:19.994710] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:49.356 [2024-11-20 16:19:19.996820] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:28:49.356 [2024-11-20 16:19:19.996840] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:28:49.356 [2024-11-20 16:19:19.996848] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000192ed0c0 00:28:49.356 [2024-11-20 16:19:20.029325] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:49.356 [2024-11-20 16:19:20.068272] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:28:49.356 [2024-11-20 16:19:20.068383] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:49.356 [2024-11-20 16:19:20.068393] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:49.356 [2024-11-20 16:19:20.068403] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:49.356 [2024-11-20 16:19:20.068448] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:28:49.356 [2024-11-20 16:19:20.068548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:28:49.356 [2024-11-20 16:19:20.068550] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.288 16:19:20 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:50.288 16:19:20 -- common/autotest_common.sh@862 -- # return 0 00:28:50.288 16:19:20 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:28:50.288 16:19:20 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:50.288 16:19:20 -- common/autotest_common.sh@10 -- # set +x 00:28:50.288 16:19:20 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:50.288 16:19:20 -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 -u 8192 00:28:50.288 16:19:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.288 16:19:20 -- common/autotest_common.sh@10 -- # set +x 00:28:50.288 [2024-11-20 16:19:20.854412] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1a48900/0x1a4cdb0) succeed. 00:28:50.288 [2024-11-20 16:19:20.863859] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1a49e00/0x1a8e450) succeed. 
00:28:50.288 16:19:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.288 16:19:20 -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:50.288 16:19:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.288 16:19:20 -- common/autotest_common.sh@10 -- # set +x 00:28:50.288 Malloc0 00:28:50.288 16:19:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.288 16:19:20 -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:50.288 16:19:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.288 16:19:20 -- common/autotest_common.sh@10 -- # set +x 00:28:50.288 16:19:20 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.288 16:19:20 -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:50.288 16:19:20 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.288 16:19:20 -- common/autotest_common.sh@10 -- # set +x 00:28:50.288 [2024-11-20 16:19:21.000721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:28:50.288 [2024-11-20 16:19:21.000750] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:50.288 [2024-11-20 16:19:21.000852] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:50.288 [2024-11-20 16:19:21.000863] nvme_ctrlr.c:1737:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:50.288 [2024-11-20 16:19:21.000874] nvme_ctrlr.c:1017:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] already in failed state 00:28:50.288 [2024-11-20 16:19:21.002382] bdev_nvme.c:2867:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:50.288 [2024-11-20 16:19:21.002658] bdev_nvme.c:2038:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:50.289 16:19:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.289 16:19:21 -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:28:50.289 16:19:21 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:50.289 16:19:21 -- common/autotest_common.sh@10 -- # set +x 00:28:50.289 [2024-11-20 16:19:21.009376] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:28:50.289 16:19:21 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:50.289 [2024-11-20 16:19:21.014092] nvme_ctrlr.c:1639:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:50.289 16:19:21 -- host/bdevperf.sh@38 -- # wait 1499189 00:28:50.289 [2024-11-20 16:19:21.050763] bdev_nvme.c:2040:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:29:00.253
00:29:00.253 Latency(us)
00:29:00.253 [2024-11-20T15:19:31.058Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:00.253 [2024-11-20T15:19:31.058Z] Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:29:00.253 Verification LBA range: start 0x0 length 0x4000
00:29:00.253 Nvme1n1 : 15.00 18647.23 72.84 16424.02 0.00 3638.14 465.31 1060320.05
00:29:00.253 [2024-11-20T15:19:31.058Z] ===================================================================================================================
00:29:00.253 [2024-11-20T15:19:31.058Z] Total : 18647.23 72.84 16424.02 0.00 3638.14 465.31 1060320.05
00:29:00.253 16:19:29 -- host/bdevperf.sh@39 -- # sync
00:29:00.253 16:19:29 -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:29:00.253 16:19:29 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:00.253 16:19:29 -- common/autotest_common.sh@10 -- # set +x
00:29:00.253 16:19:29 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:00.253 16:19:29 -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:29:00.253 16:19:29 -- host/bdevperf.sh@44 -- # nvmftestfini
00:29:00.253 16:19:29 -- nvmf/common.sh@476 -- # nvmfcleanup
00:29:00.253 16:19:29 -- nvmf/common.sh@116 -- # sync
00:29:00.253 16:19:29 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']'
00:29:00.253 16:19:29 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']'
00:29:00.253 16:19:29 -- nvmf/common.sh@119 -- # set +e
00:29:00.253 16:19:29 -- nvmf/common.sh@120 -- # for i in {1..20}
00:29:00.253 16:19:29 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma
00:29:00.253 rmmod nvme_rdma
00:29:00.253 rmmod nvme_fabrics
00:29:00.253 16:19:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics
00:29:00.253 16:19:29 -- nvmf/common.sh@123 -- # set -e
00:29:00.253 16:19:29 -- nvmf/common.sh@124 -- # return 0
00:29:00.253 16:19:29 -- nvmf/common.sh@477 -- # '[' -n 1500233 ']'
00:29:00.253 16:19:29 -- nvmf/common.sh@478 -- # killprocess 1500233
00:29:00.253 16:19:29 -- common/autotest_common.sh@936 -- # '[' -z 1500233 ']'
00:29:00.253 16:19:29 -- common/autotest_common.sh@940 -- # kill -0 1500233
00:29:00.253 16:19:29 -- common/autotest_common.sh@941 -- # uname
00:29:00.253 16:19:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:00.253 16:19:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1500233
00:29:00.253 16:19:29 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:29:00.253 16:19:29 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:29:00.253 16:19:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1500233'
00:29:00.253 killing process with pid 1500233
00:29:00.253 16:19:29 -- common/autotest_common.sh@955 -- # kill 1500233
00:29:00.253 16:19:29 -- common/autotest_common.sh@960 -- # wait 1500233
00:29:00.253 16:19:29 -- nvmf/common.sh@480 -- # '[' '' == iso ']'
00:29:00.253 16:19:29 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]]
00:29:00.253
00:29:00.253 real 0m25.132s
00:29:00.253 user 1m4.241s
00:29:00.253 sys 0m6.123s
00:29:00.253 16:19:29 -- common/autotest_common.sh@1115 -- # xtrace_disable
00:29:00.253 16:19:29 -- common/autotest_common.sh@10 -- # set +x
00:29:00.253 ************************************
00:29:00.253 END TEST nvmf_bdevperf
00:29:00.253 ************************************
00:29:00.253 16:19:29 -- nvmf/nvmf.sh@124 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh
--transport=rdma 00:29:00.253 16:19:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:00.253 16:19:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:00.253 16:19:29 -- common/autotest_common.sh@10 -- # set +x 00:29:00.253 ************************************ 00:29:00.253 START TEST nvmf_target_disconnect 00:29:00.253 ************************************ 00:29:00.253 16:19:29 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=rdma 00:29:00.253 * Looking for test storage... 00:29:00.253 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host 00:29:00.253 16:19:29 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:00.253 16:19:29 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:00.253 16:19:29 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:00.253 16:19:30 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:00.253 16:19:30 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:00.253 16:19:30 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:00.253 16:19:30 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:00.253 16:19:30 -- scripts/common.sh@335 -- # IFS=.-: 00:29:00.253 16:19:30 -- scripts/common.sh@335 -- # read -ra ver1 00:29:00.253 16:19:30 -- scripts/common.sh@336 -- # IFS=.-: 00:29:00.253 16:19:30 -- scripts/common.sh@336 -- # read -ra ver2 00:29:00.253 16:19:30 -- scripts/common.sh@337 -- # local 'op=<' 00:29:00.253 16:19:30 -- scripts/common.sh@339 -- # ver1_l=2 00:29:00.253 16:19:30 -- scripts/common.sh@340 -- # ver2_l=1 00:29:00.253 16:19:30 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:00.253 16:19:30 -- scripts/common.sh@343 -- # case "$op" in 00:29:00.253 16:19:30 -- scripts/common.sh@344 -- # : 1 00:29:00.253 16:19:30 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:00.253 16:19:30 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:00.253 16:19:30 -- scripts/common.sh@364 -- # decimal 1 00:29:00.253 16:19:30 -- scripts/common.sh@352 -- # local d=1 00:29:00.253 16:19:30 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:00.253 16:19:30 -- scripts/common.sh@354 -- # echo 1 00:29:00.253 16:19:30 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:00.253 16:19:30 -- scripts/common.sh@365 -- # decimal 2 00:29:00.253 16:19:30 -- scripts/common.sh@352 -- # local d=2 00:29:00.253 16:19:30 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:00.253 16:19:30 -- scripts/common.sh@354 -- # echo 2 00:29:00.253 16:19:30 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:00.253 16:19:30 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:00.253 16:19:30 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:00.254 16:19:30 -- scripts/common.sh@367 -- # return 0 00:29:00.254 16:19:30 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:00.254 16:19:30 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:00.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.254 --rc genhtml_branch_coverage=1 00:29:00.254 --rc genhtml_function_coverage=1 00:29:00.254 --rc genhtml_legend=1 00:29:00.254 --rc geninfo_all_blocks=1 00:29:00.254 --rc geninfo_unexecuted_blocks=1 00:29:00.254 00:29:00.254 ' 00:29:00.254 16:19:30 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:00.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.254 --rc genhtml_branch_coverage=1 00:29:00.254 --rc genhtml_function_coverage=1 00:29:00.254 --rc genhtml_legend=1 00:29:00.254 --rc geninfo_all_blocks=1 00:29:00.254 --rc geninfo_unexecuted_blocks=1 00:29:00.254 00:29:00.254 ' 00:29:00.254 16:19:30 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:00.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.254 --rc genhtml_branch_coverage=1 00:29:00.254 --rc genhtml_function_coverage=1 00:29:00.254 --rc genhtml_legend=1 00:29:00.254 --rc geninfo_all_blocks=1 00:29:00.254 --rc geninfo_unexecuted_blocks=1 00:29:00.254 00:29:00.254 ' 00:29:00.254 16:19:30 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:00.254 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:00.254 --rc genhtml_branch_coverage=1 00:29:00.254 --rc genhtml_function_coverage=1 00:29:00.254 --rc genhtml_legend=1 00:29:00.254 --rc geninfo_all_blocks=1 00:29:00.254 --rc geninfo_unexecuted_blocks=1 00:29:00.254 00:29:00.254 ' 00:29:00.254 16:19:30 -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:00.254 16:19:30 -- nvmf/common.sh@7 -- # uname -s 00:29:00.254 16:19:30 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:00.254 16:19:30 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:00.254 16:19:30 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:00.254 16:19:30 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:00.254 16:19:30 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:00.254 16:19:30 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:00.254 16:19:30 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:00.254 16:19:30 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:00.254 16:19:30 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:00.254 16:19:30 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:00.254 16:19:30 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:00.254 16:19:30 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:00.254 16:19:30 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:00.254 16:19:30 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:00.254 16:19:30 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:00.254 16:19:30 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:00.254 16:19:30 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:00.254 16:19:30 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:00.254 16:19:30 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:00.254 16:19:30 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.254 16:19:30 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.254 16:19:30 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.254 16:19:30 -- paths/export.sh@5 -- # export PATH 00:29:00.254 16:19:30 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:00.254 16:19:30 -- nvmf/common.sh@46 -- # : 0 00:29:00.254 16:19:30 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:00.254 16:19:30 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:00.254 16:19:30 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:00.254 16:19:30 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:00.254 16:19:30 -- 
nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:00.254 16:19:30 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:00.254 16:19:30 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 00:29:00.254 16:19:30 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:00.254 16:19:30 -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-phy-autotest/spdk/app/fio/nvme 00:29:00.254 16:19:30 -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:00.254 16:19:30 -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:00.254 16:19:30 -- host/target_disconnect.sh@77 -- # nvmftestinit 00:29:00.254 16:19:30 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:29:00.254 16:19:30 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.254 16:19:30 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:00.254 16:19:30 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:00.254 16:19:30 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:00.254 16:19:30 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.254 16:19:30 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:00.254 16:19:30 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.254 16:19:30 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:00.254 16:19:30 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:00.254 16:19:30 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:00.254 16:19:30 -- common/autotest_common.sh@10 -- # set +x 00:29:06.828 16:19:36 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:06.828 16:19:36 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:06.828 16:19:36 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:06.828 16:19:36 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:06.828 16:19:36 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:06.828 16:19:36 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:06.828 16:19:36 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:06.828 16:19:36 -- nvmf/common.sh@294 -- # net_devs=() 00:29:06.828 16:19:36 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:06.828 16:19:36 -- nvmf/common.sh@295 -- # e810=() 00:29:06.828 16:19:36 -- nvmf/common.sh@295 -- # local -ga e810 00:29:06.828 16:19:36 -- nvmf/common.sh@296 -- # x722=() 00:29:06.828 16:19:36 -- nvmf/common.sh@296 -- # local -ga x722 00:29:06.828 16:19:36 -- nvmf/common.sh@297 -- # mlx=() 00:29:06.828 16:19:36 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:06.828 16:19:36 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:06.828 16:19:36 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:06.828 16:19:36 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:06.828 16:19:36 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:06.828 16:19:36 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:06.828 16:19:36 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:06.828 16:19:36 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:06.828 16:19:36 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:06.828 16:19:36 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:06.828 16:19:36 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:06.828 16:19:36 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:06.828 16:19:36 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 
00:29:06.828 16:19:36 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:29:06.828 16:19:36 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:29:06.828 16:19:36 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:29:06.828 16:19:36 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:29:06.828 16:19:36 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:29:06.828 16:19:36 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:06.828 16:19:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:06.828 16:19:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:06.828 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:06.828 16:19:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:06.828 16:19:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:06.828 16:19:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:06.828 16:19:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:06.829 16:19:36 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:06.829 16:19:36 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:06.829 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:06.829 16:19:36 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:06.829 16:19:36 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:06.829 16:19:36 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:06.829 16:19:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.829 16:19:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:06.829 16:19:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.829 16:19:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:06.829 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:06.829 16:19:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.829 16:19:36 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:06.829 16:19:36 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:06.829 16:19:36 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:06.829 16:19:36 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:06.829 16:19:36 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:06.829 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:06.829 16:19:36 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:06.829 16:19:36 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:06.829 16:19:36 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:06.829 16:19:36 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@408 -- # rdma_device_init 00:29:06.829 16:19:36 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:29:06.829 16:19:36 -- nvmf/common.sh@57 -- # uname 
00:29:06.829 16:19:36 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:29:06.829 16:19:36 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:29:06.829 16:19:36 -- nvmf/common.sh@62 -- # modprobe ib_core 00:29:06.829 16:19:36 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:29:06.829 16:19:36 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:29:06.829 16:19:36 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:29:06.829 16:19:36 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:29:06.829 16:19:36 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:29:06.829 16:19:36 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:29:06.829 16:19:36 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:06.829 16:19:36 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:29:06.829 16:19:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:06.829 16:19:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:06.829 16:19:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:06.829 16:19:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:06.829 16:19:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:06.829 16:19:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:06.829 16:19:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:06.829 16:19:36 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:06.829 16:19:36 -- nvmf/common.sh@104 -- # continue 2 00:29:06.829 16:19:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:06.829 16:19:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:06.829 16:19:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:06.829 16:19:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:06.829 16:19:36 -- nvmf/common.sh@104 -- # continue 2 00:29:06.829 16:19:36 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:06.829 16:19:36 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:29:06.829 16:19:36 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:06.829 16:19:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:06.829 16:19:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:06.829 16:19:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:06.829 16:19:36 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:29:06.829 16:19:36 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:29:06.829 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:06.829 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:06.829 altname enp217s0f0np0 00:29:06.829 altname ens818f0np0 00:29:06.829 inet 192.168.100.8/24 scope global mlx_0_0 00:29:06.829 valid_lft forever preferred_lft forever 00:29:06.829 16:19:36 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:06.829 16:19:36 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:29:06.829 16:19:36 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:06.829 16:19:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:06.829 16:19:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:06.829 16:19:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:06.829 16:19:36 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:29:06.829 16:19:36 -- nvmf/common.sh@74 -- # 
[[ -z 192.168.100.9 ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:29:06.829 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:06.829 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:06.829 altname enp217s0f1np1 00:29:06.829 altname ens818f1np1 00:29:06.829 inet 192.168.100.9/24 scope global mlx_0_1 00:29:06.829 valid_lft forever preferred_lft forever 00:29:06.829 16:19:36 -- nvmf/common.sh@410 -- # return 0 00:29:06.829 16:19:36 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:06.829 16:19:36 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:06.829 16:19:36 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:29:06.829 16:19:36 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:29:06.829 16:19:36 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:06.829 16:19:36 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:06.829 16:19:36 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:06.829 16:19:36 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:06.829 16:19:36 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:06.829 16:19:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:06.829 16:19:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:06.829 16:19:36 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:06.829 16:19:36 -- nvmf/common.sh@104 -- # continue 2 00:29:06.829 16:19:36 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:06.829 16:19:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:06.829 16:19:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:06.829 16:19:36 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:06.829 16:19:36 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:06.829 16:19:36 -- nvmf/common.sh@104 -- # continue 2 00:29:06.829 16:19:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:06.829 16:19:36 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:29:06.829 16:19:36 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:06.829 16:19:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:06.829 16:19:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:06.829 16:19:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:06.829 16:19:36 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:06.829 16:19:36 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:29:06.829 16:19:36 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:06.829 16:19:36 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:06.829 16:19:36 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:06.829 16:19:36 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:06.829 16:19:36 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:29:06.829 192.168.100.9' 00:29:06.829 16:19:36 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:29:06.829 192.168.100.9' 00:29:06.829 16:19:36 -- nvmf/common.sh@445 -- # head -n 1 00:29:06.829 16:19:36 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:06.829 16:19:36 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:29:06.829 192.168.100.9' 00:29:06.829 16:19:36 -- nvmf/common.sh@446 -- # tail -n +2 00:29:06.829 16:19:36 -- nvmf/common.sh@446 -- # 
head -n 1 00:29:06.829 16:19:36 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:06.829 16:19:36 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:29:06.829 16:19:36 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:06.829 16:19:36 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:29:06.829 16:19:36 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:29:06.829 16:19:36 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:29:06.829 16:19:36 -- host/target_disconnect.sh@78 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:06.829 16:19:36 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:06.829 16:19:36 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:06.829 16:19:36 -- common/autotest_common.sh@10 -- # set +x 00:29:06.829 ************************************ 00:29:06.829 START TEST nvmf_target_disconnect_tc1 00:29:06.829 ************************************ 00:29:06.829 16:19:36 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc1 00:29:06.829 16:19:36 -- host/target_disconnect.sh@32 -- # set +e 00:29:06.829 16:19:36 -- host/target_disconnect.sh@33 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:06.829 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.829 [2024-11-20 16:19:36.773086] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:06.829 [2024-11-20 16:19:36.773209] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:06.830 [2024-11-20 16:19:36.773254] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d70c0 00:29:07.090 [2024-11-20 16:19:37.777287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:07.090 [2024-11-20 16:19:37.777345] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:29:07.090 [2024-11-20 16:19:37.777377] nvme_ctrlr.c:4028:nvme_ctrlr_process_init: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr is in error state 00:29:07.090 [2024-11-20 16:19:37.777434] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:07.090 [2024-11-20 16:19:37.777463] nvme.c: 898:spdk_nvme_probe: *ERROR*: Create probe context failed 00:29:07.090 spdk_nvme_probe() failed for transport address '192.168.100.8' 00:29:07.090 /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:07.090 Initializing NVMe Controllers 00:29:07.090 16:19:37 -- host/target_disconnect.sh@33 -- # trap - ERR 00:29:07.090 16:19:37 -- host/target_disconnect.sh@33 -- # print_backtrace 00:29:07.090 16:19:37 -- common/autotest_common.sh@1142 -- # [[ hxBET =~ e ]] 00:29:07.090 16:19:37 -- common/autotest_common.sh@1142 -- # return 0 00:29:07.090 16:19:37 -- host/target_disconnect.sh@37 -- # '[' 1 '!=' 1 ']' 00:29:07.090 16:19:37 -- host/target_disconnect.sh@41 -- # set -e 00:29:07.090 00:29:07.090 real 0m1.132s 00:29:07.090 user 0m0.862s 00:29:07.090 sys 0m0.258s 00:29:07.090 16:19:37 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:07.090 16:19:37 -- common/autotest_common.sh@10 -- # set +x 00:29:07.090 ************************************ 00:29:07.090 END TEST nvmf_target_disconnect_tc1 00:29:07.090 ************************************ 00:29:07.090 16:19:37 -- host/target_disconnect.sh@79 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:07.090 16:19:37 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:07.090 16:19:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:07.090 16:19:37 -- common/autotest_common.sh@10 -- # set +x 00:29:07.090 ************************************ 00:29:07.090 START TEST nvmf_target_disconnect_tc2 00:29:07.090 ************************************ 00:29:07.090 16:19:37 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc2 00:29:07.090 16:19:37 -- host/target_disconnect.sh@45 -- # disconnect_init 192.168.100.8 00:29:07.090 16:19:37 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:07.090 16:19:37 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:07.090 16:19:37 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:07.090 16:19:37 -- common/autotest_common.sh@10 -- # set +x 00:29:07.090 16:19:37 -- nvmf/common.sh@469 -- # nvmfpid=1505365 00:29:07.090 16:19:37 -- nvmf/common.sh@470 -- # waitforlisten 1505365 00:29:07.090 16:19:37 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:07.090 16:19:37 -- common/autotest_common.sh@829 -- # '[' -z 1505365 ']' 00:29:07.090 16:19:37 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.090 16:19:37 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:07.090 16:19:37 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:07.090 16:19:37 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:07.090 16:19:37 -- common/autotest_common.sh@10 -- # set +x 00:29:07.351 [2024-11-20 16:19:37.897477] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:29:07.351 [2024-11-20 16:19:37.897531] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.351 EAL: No free 2048 kB hugepages reported on node 1 00:29:07.351 [2024-11-20 16:19:37.983871] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:07.351 [2024-11-20 16:19:38.021843] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:07.351 [2024-11-20 16:19:38.021968] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.351 [2024-11-20 16:19:38.021979] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.351 [2024-11-20 16:19:38.021991] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:07.351 [2024-11-20 16:19:38.022133] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:07.351 [2024-11-20 16:19:38.022242] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:07.351 [2024-11-20 16:19:38.022348] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:07.351 [2024-11-20 16:19:38.022350] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:07.920 16:19:38 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:07.920 16:19:38 -- common/autotest_common.sh@862 -- # return 0 00:29:07.920 16:19:38 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:07.920 16:19:38 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:07.920 16:19:38 -- common/autotest_common.sh@10 -- # set +x 00:29:08.180 16:19:38 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.180 16:19:38 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:08.180 16:19:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.180 16:19:38 -- common/autotest_common.sh@10 -- # set +x 00:29:08.180 Malloc0 00:29:08.180 16:19:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.180 16:19:38 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:08.180 16:19:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.180 16:19:38 -- common/autotest_common.sh@10 -- # set +x 00:29:08.180 [2024-11-20 16:19:38.811500] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1947ab0/0x1953580) succeed. 00:29:08.180 [2024-11-20 16:19:38.820915] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1949050/0x1994c20) succeed. 
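Note: the xtrace earlier in this run shows nvmf/common.sh enumerating mlx_0_0/mlx_0_1 and deriving the two RDMA target addresses before this nvmf_tgt instance comes up. A minimal standalone sketch of that address derivation (simplified from the get_rdma_if_list/get_ip_address trace, assuming the interface names seen in this run):

  # Sketch only: not the exact nvmf/common.sh code.
  get_ip_address() {
      local interface=$1
      # Field $4 of `ip -o -4 addr show` is the CIDR address; strip the prefix length.
      ip -o -4 addr show "$interface" | awk '{print $4}' | cut -d/ -f1
  }
  RDMA_IP_LIST=$(for nic in mlx_0_0 mlx_0_1; do get_ip_address "$nic"; done)
  NVMF_FIRST_TARGET_IP=$(echo "$RDMA_IP_LIST" | head -n 1)               # 192.168.100.8
  NVMF_SECOND_TARGET_IP=$(echo "$RDMA_IP_LIST" | tail -n +2 | head -n 1) # 192.168.100.9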
00:29:08.180 16:19:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.180 16:19:38 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:08.180 16:19:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.180 16:19:38 -- common/autotest_common.sh@10 -- # set +x 00:29:08.180 16:19:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.180 16:19:38 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:08.180 16:19:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.180 16:19:38 -- common/autotest_common.sh@10 -- # set +x 00:29:08.180 16:19:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.180 16:19:38 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:08.180 16:19:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.180 16:19:38 -- common/autotest_common.sh@10 -- # set +x 00:29:08.180 [2024-11-20 16:19:38.967107] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:08.180 16:19:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.180 16:19:38 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:08.180 16:19:38 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:08.180 16:19:38 -- common/autotest_common.sh@10 -- # set +x 00:29:08.180 16:19:38 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:08.180 16:19:38 -- host/target_disconnect.sh@50 -- # reconnectpid=1505535 00:29:08.180 16:19:38 -- host/target_disconnect.sh@52 -- # sleep 2 00:29:08.180 16:19:38 -- host/target_disconnect.sh@48 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420' 00:29:08.441 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.351 16:19:40 -- host/target_disconnect.sh@53 -- # kill -9 1505365 00:29:10.351 16:19:40 -- host/target_disconnect.sh@55 -- # sleep 2 00:29:11.735 Read completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Read completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Read completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Read completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Write completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Read completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Write completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Write completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Write completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Write completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Read completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Write completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Write completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Read completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Read completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Read completed with error 
(sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Write completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Read completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Write completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Read completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Read completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.735 Read completed with error (sct=0, sc=8) 00:29:11.735 starting I/O failed 00:29:11.736 Write completed with error (sct=0, sc=8) 00:29:11.736 starting I/O failed 00:29:11.736 Write completed with error (sct=0, sc=8) 00:29:11.736 starting I/O failed 00:29:11.736 Read completed with error (sct=0, sc=8) 00:29:11.736 starting I/O failed 00:29:11.736 Read completed with error (sct=0, sc=8) 00:29:11.736 starting I/O failed 00:29:11.736 Read completed with error (sct=0, sc=8) 00:29:11.736 starting I/O failed 00:29:11.736 Write completed with error (sct=0, sc=8) 00:29:11.736 starting I/O failed 00:29:11.736 Read completed with error (sct=0, sc=8) 00:29:11.736 starting I/O failed 00:29:11.736 Write completed with error (sct=0, sc=8) 00:29:11.736 starting I/O failed 00:29:11.736 Write completed with error (sct=0, sc=8) 00:29:11.736 starting I/O failed 00:29:11.736 Read completed with error (sct=0, sc=8) 00:29:11.736 starting I/O failed 00:29:11.736 [2024-11-20 16:19:42.163395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:12.305 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 44: 1505365 Killed "${NVMF_APP[@]}" "$@" 00:29:12.305 16:19:42 -- host/target_disconnect.sh@56 -- # disconnect_init 192.168.100.8 00:29:12.305 16:19:42 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:12.305 16:19:42 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:12.305 16:19:42 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:12.305 16:19:42 -- common/autotest_common.sh@10 -- # set +x 00:29:12.305 16:19:42 -- nvmf/common.sh@469 -- # nvmfpid=1506208 00:29:12.305 16:19:43 -- nvmf/common.sh@470 -- # waitforlisten 1506208 00:29:12.305 16:19:43 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:12.305 16:19:43 -- common/autotest_common.sh@829 -- # '[' -z 1506208 ']' 00:29:12.305 16:19:43 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:12.305 16:19:43 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:12.305 16:19:43 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:12.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:12.305 16:19:43 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:12.305 16:19:43 -- common/autotest_common.sh@10 -- # set +x 00:29:12.305 [2024-11-20 16:19:43.043785] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
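Note: the burst of error completions above is the expected outcome of the tc2 disconnect step traced just before it: the reconnect initiator is started against cnode1, the first target process is hard-killed, and every in-flight I/O on the queue pair is completed with an error before the CQ transport error is reported. A hedged sketch of that sequence ($spdk_dir and $nvmfpid are placeholders for the workspace path and target pid, 1505365 in this run; the real logic lives in host/target_disconnect.sh):

  "$spdk_dir/build/examples/reconnect" -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r "trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420" &
  reconnectpid=$!
  sleep 2
  kill -9 "$nvmfpid"   # hard-kill the target; outstanding I/O is aborted on the host side
  sleep 2              # let the initiator observe the CQ transport error and start reconnecting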
00:29:12.305 [2024-11-20 16:19:43.043835] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:12.305 EAL: No free 2048 kB hugepages reported on node 1 00:29:12.599 [2024-11-20 16:19:43.130758] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:12.599 Write completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Write completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Write completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Write completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Write completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Write completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Write completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Write completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Read completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Write completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Write completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 Write completed with error (sct=0, sc=8) 00:29:12.599 starting I/O failed 00:29:12.599 [2024-11-20 16:19:43.166930] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:12.599 [2024-11-20 16:19:43.167038] app.c: 488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:12.599 [2024-11-20 16:19:43.167048] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:12.599 [2024-11-20 16:19:43.167057] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:12.599 [2024-11-20 16:19:43.167183] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:12.599 [2024-11-20 16:19:43.167292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:12.599 [2024-11-20 16:19:43.167400] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:12.599 [2024-11-20 16:19:43.167402] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:12.599 [2024-11-20 16:19:43.168423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:13.218 16:19:43 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:13.218 16:19:43 -- common/autotest_common.sh@862 -- # return 0 00:29:13.218 16:19:43 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:13.218 16:19:43 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:13.218 16:19:43 -- common/autotest_common.sh@10 -- # set +x 00:29:13.218 16:19:43 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:13.218 16:19:43 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:13.218 16:19:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.218 16:19:43 -- common/autotest_common.sh@10 -- # set +x 00:29:13.218 Malloc0 00:29:13.218 16:19:43 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.218 16:19:43 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:13.218 16:19:43 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.218 16:19:43 -- common/autotest_common.sh@10 -- # set +x 00:29:13.218 [2024-11-20 16:19:43.960034] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x127fab0/0x128b580) succeed. 00:29:13.218 [2024-11-20 16:19:43.969349] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1281050/0x12ccc20) succeed. 
00:29:13.479 16:19:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.479 16:19:44 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:13.479 16:19:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.479 16:19:44 -- common/autotest_common.sh@10 -- # set +x 00:29:13.479 16:19:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.479 16:19:44 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:13.479 16:19:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.479 16:19:44 -- common/autotest_common.sh@10 -- # set +x 00:29:13.479 16:19:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.479 16:19:44 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420 00:29:13.479 16:19:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.479 16:19:44 -- common/autotest_common.sh@10 -- # set +x 00:29:13.479 [2024-11-20 16:19:44.110154] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4420 *** 00:29:13.479 16:19:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.479 16:19:44 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420 00:29:13.479 16:19:44 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:13.479 16:19:44 -- common/autotest_common.sh@10 -- # set +x 00:29:13.479 16:19:44 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:13.479 16:19:44 -- host/target_disconnect.sh@58 -- # wait 1505535 00:29:13.479 Write completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Write completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Write completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Write completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Write completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Write completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Write completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Write completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Write completed with 
error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Read completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Write completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Write completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Write completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Write completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 Write completed with error (sct=0, sc=8) 00:29:13.479 starting I/O failed 00:29:13.479 [2024-11-20 16:19:44.173537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.480 [2024-11-20 16:19:44.184101] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.480 [2024-11-20 16:19:44.184154] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.480 [2024-11-20 16:19:44.184174] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.480 [2024-11-20 16:19:44.184184] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.480 [2024-11-20 16:19:44.184194] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.480 [2024-11-20 16:19:44.194371] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-20 16:19:44.204189] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.480 [2024-11-20 16:19:44.204235] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.480 [2024-11-20 16:19:44.204256] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.480 [2024-11-20 16:19:44.204265] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.480 [2024-11-20 16:19:44.204274] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.480 [2024-11-20 16:19:44.214307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.480 qpair failed and we were unable to recover it. 
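Note: the rpc_cmd sequence that configured the restarted target above (Malloc0 bdev, RDMA transport, cnode1 subsystem plus listeners) corresponds to the following direct scripts/rpc.py invocations. This is a sketch assuming the default /var/tmp/spdk.sock RPC socket rather than the harness's rpc_cmd wrapper:

  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_transport -t rdma --num-shared-buffers 1024
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.8 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.8 -s 4420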
00:29:13.480 [2024-11-20 16:19:44.224270] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.480 [2024-11-20 16:19:44.224306] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.480 [2024-11-20 16:19:44.224323] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.480 [2024-11-20 16:19:44.224332] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.480 [2024-11-20 16:19:44.224340] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.480 [2024-11-20 16:19:44.234412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-20 16:19:44.244254] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.480 [2024-11-20 16:19:44.244297] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.480 [2024-11-20 16:19:44.244317] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.480 [2024-11-20 16:19:44.244327] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.480 [2024-11-20 16:19:44.244336] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.480 [2024-11-20 16:19:44.254541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.480 qpair failed and we were unable to recover it. 00:29:13.480 [2024-11-20 16:19:44.264384] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.480 [2024-11-20 16:19:44.264423] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.480 [2024-11-20 16:19:44.264439] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.480 [2024-11-20 16:19:44.264448] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.480 [2024-11-20 16:19:44.264457] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.480 [2024-11-20 16:19:44.274580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.480 qpair failed and we were unable to recover it. 
00:29:13.740 [2024-11-20 16:19:44.284403] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.740 [2024-11-20 16:19:44.284445] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.740 [2024-11-20 16:19:44.284462] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.740 [2024-11-20 16:19:44.284471] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.740 [2024-11-20 16:19:44.284480] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.740 [2024-11-20 16:19:44.294561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.740 qpair failed and we were unable to recover it. 00:29:13.740 [2024-11-20 16:19:44.304460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.740 [2024-11-20 16:19:44.304502] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.740 [2024-11-20 16:19:44.304524] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.740 [2024-11-20 16:19:44.304534] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.740 [2024-11-20 16:19:44.304543] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.740 [2024-11-20 16:19:44.314565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.740 qpair failed and we were unable to recover it. 00:29:13.740 [2024-11-20 16:19:44.324489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.740 [2024-11-20 16:19:44.324537] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.740 [2024-11-20 16:19:44.324554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.740 [2024-11-20 16:19:44.324563] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.740 [2024-11-20 16:19:44.324572] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.740 [2024-11-20 16:19:44.334882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.740 qpair failed and we were unable to recover it. 
00:29:13.740 [2024-11-20 16:19:44.344635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.740 [2024-11-20 16:19:44.344674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.740 [2024-11-20 16:19:44.344691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.740 [2024-11-20 16:19:44.344700] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.741 [2024-11-20 16:19:44.344709] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.741 [2024-11-20 16:19:44.354886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.741 qpair failed and we were unable to recover it. 00:29:13.741 [2024-11-20 16:19:44.364604] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.741 [2024-11-20 16:19:44.364641] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.741 [2024-11-20 16:19:44.364658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.741 [2024-11-20 16:19:44.364667] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.741 [2024-11-20 16:19:44.364676] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.741 [2024-11-20 16:19:44.374958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.741 qpair failed and we were unable to recover it. 00:29:13.741 [2024-11-20 16:19:44.384743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.741 [2024-11-20 16:19:44.384787] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.741 [2024-11-20 16:19:44.384805] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.741 [2024-11-20 16:19:44.384814] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.741 [2024-11-20 16:19:44.384822] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.741 [2024-11-20 16:19:44.394927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.741 qpair failed and we were unable to recover it. 
00:29:13.741 [2024-11-20 16:19:44.404734] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.741 [2024-11-20 16:19:44.404776] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.741 [2024-11-20 16:19:44.404793] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.741 [2024-11-20 16:19:44.404802] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.741 [2024-11-20 16:19:44.404810] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.741 [2024-11-20 16:19:44.414988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.741 qpair failed and we were unable to recover it. 00:29:13.741 [2024-11-20 16:19:44.424733] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.741 [2024-11-20 16:19:44.424773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.741 [2024-11-20 16:19:44.424789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.741 [2024-11-20 16:19:44.424798] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.741 [2024-11-20 16:19:44.424806] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.741 [2024-11-20 16:19:44.435121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.741 qpair failed and we were unable to recover it. 00:29:13.741 [2024-11-20 16:19:44.444832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.741 [2024-11-20 16:19:44.444874] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.741 [2024-11-20 16:19:44.444890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.741 [2024-11-20 16:19:44.444899] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.741 [2024-11-20 16:19:44.444907] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.741 [2024-11-20 16:19:44.455166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.741 qpair failed and we were unable to recover it. 
00:29:13.741 [2024-11-20 16:19:44.464909] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.741 [2024-11-20 16:19:44.464947] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.741 [2024-11-20 16:19:44.464963] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.741 [2024-11-20 16:19:44.464976] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.741 [2024-11-20 16:19:44.464984] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.741 [2024-11-20 16:19:44.475239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.741 qpair failed and we were unable to recover it. 00:29:13.741 [2024-11-20 16:19:44.485112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.741 [2024-11-20 16:19:44.485156] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.741 [2024-11-20 16:19:44.485172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.741 [2024-11-20 16:19:44.485181] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.741 [2024-11-20 16:19:44.485190] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.741 [2024-11-20 16:19:44.495322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.741 qpair failed and we were unable to recover it. 00:29:13.741 [2024-11-20 16:19:44.504954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.741 [2024-11-20 16:19:44.505000] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.741 [2024-11-20 16:19:44.505017] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.741 [2024-11-20 16:19:44.505026] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.741 [2024-11-20 16:19:44.505035] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.741 [2024-11-20 16:19:44.515358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.741 qpair failed and we were unable to recover it. 
00:29:13.741 [2024-11-20 16:19:44.525045] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:13.741 [2024-11-20 16:19:44.525091] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:13.741 [2024-11-20 16:19:44.525108] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:13.741 [2024-11-20 16:19:44.525117] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:13.741 [2024-11-20 16:19:44.525126] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:13.741 [2024-11-20 16:19:44.535357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:13.741 qpair failed and we were unable to recover it. 00:29:14.003 [2024-11-20 16:19:44.545065] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.003 [2024-11-20 16:19:44.545110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.003 [2024-11-20 16:19:44.545126] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.003 [2024-11-20 16:19:44.545136] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.003 [2024-11-20 16:19:44.545145] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.003 [2024-11-20 16:19:44.555381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.003 qpair failed and we were unable to recover it. 00:29:14.003 [2024-11-20 16:19:44.565262] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.003 [2024-11-20 16:19:44.565303] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.003 [2024-11-20 16:19:44.565320] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.003 [2024-11-20 16:19:44.565329] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.003 [2024-11-20 16:19:44.565337] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.003 [2024-11-20 16:19:44.575568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.003 qpair failed and we were unable to recover it. 
00:29:14.003 [2024-11-20 16:19:44.585362] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.003 [2024-11-20 16:19:44.585399] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.003 [2024-11-20 16:19:44.585415] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.003 [2024-11-20 16:19:44.585424] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.003 [2024-11-20 16:19:44.585432] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.003 [2024-11-20 16:19:44.595592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.003 qpair failed and we were unable to recover it. 00:29:14.003 [2024-11-20 16:19:44.605325] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.003 [2024-11-20 16:19:44.605367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.003 [2024-11-20 16:19:44.605383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.003 [2024-11-20 16:19:44.605392] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.003 [2024-11-20 16:19:44.605400] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.003 [2024-11-20 16:19:44.615789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.003 qpair failed and we were unable to recover it. 00:29:14.003 [2024-11-20 16:19:44.625561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.003 [2024-11-20 16:19:44.625596] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.003 [2024-11-20 16:19:44.625612] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.003 [2024-11-20 16:19:44.625621] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.003 [2024-11-20 16:19:44.625630] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.003 [2024-11-20 16:19:44.635911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.003 qpair failed and we were unable to recover it. 
00:29:14.003 [2024-11-20 16:19:44.645477] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.003 [2024-11-20 16:19:44.645522] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.003 [2024-11-20 16:19:44.645542] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.003 [2024-11-20 16:19:44.645551] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.003 [2024-11-20 16:19:44.645560] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.003 [2024-11-20 16:19:44.656039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.003 qpair failed and we were unable to recover it. 00:29:14.003 [2024-11-20 16:19:44.665553] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.003 [2024-11-20 16:19:44.665599] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.003 [2024-11-20 16:19:44.665616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.003 [2024-11-20 16:19:44.665625] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.003 [2024-11-20 16:19:44.665634] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.003 [2024-11-20 16:19:44.675983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.003 qpair failed and we were unable to recover it. 00:29:14.003 [2024-11-20 16:19:44.685690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.003 [2024-11-20 16:19:44.685739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.003 [2024-11-20 16:19:44.685755] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.003 [2024-11-20 16:19:44.685764] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.003 [2024-11-20 16:19:44.685773] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.003 [2024-11-20 16:19:44.695935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.003 qpair failed and we were unable to recover it. 
00:29:14.003 [2024-11-20 16:19:44.705617] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.003 [2024-11-20 16:19:44.705652] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.003 [2024-11-20 16:19:44.705669] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.003 [2024-11-20 16:19:44.705678] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.003 [2024-11-20 16:19:44.705686] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.003 [2024-11-20 16:19:44.716210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.003 qpair failed and we were unable to recover it. 00:29:14.003 [2024-11-20 16:19:44.725689] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.004 [2024-11-20 16:19:44.725730] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.004 [2024-11-20 16:19:44.725747] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.004 [2024-11-20 16:19:44.725756] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.004 [2024-11-20 16:19:44.725764] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.004 [2024-11-20 16:19:44.736282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.004 qpair failed and we were unable to recover it. 00:29:14.004 [2024-11-20 16:19:44.745808] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.004 [2024-11-20 16:19:44.745851] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.004 [2024-11-20 16:19:44.745868] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.004 [2024-11-20 16:19:44.745877] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.004 [2024-11-20 16:19:44.745886] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.004 [2024-11-20 16:19:44.756237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.004 qpair failed and we were unable to recover it. 
00:29:14.004 [2024-11-20 16:19:44.765906] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.004 [2024-11-20 16:19:44.765948] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.004 [2024-11-20 16:19:44.765965] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.004 [2024-11-20 16:19:44.765974] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.004 [2024-11-20 16:19:44.765982] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.004 [2024-11-20 16:19:44.776541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.004 qpair failed and we were unable to recover it. 00:29:14.004 [2024-11-20 16:19:44.785937] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.004 [2024-11-20 16:19:44.785980] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.004 [2024-11-20 16:19:44.785996] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.004 [2024-11-20 16:19:44.786005] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.004 [2024-11-20 16:19:44.786014] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.004 [2024-11-20 16:19:44.796380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.004 qpair failed and we were unable to recover it. 00:29:14.004 [2024-11-20 16:19:44.805943] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.004 [2024-11-20 16:19:44.805986] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.004 [2024-11-20 16:19:44.806003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.004 [2024-11-20 16:19:44.806012] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.004 [2024-11-20 16:19:44.806020] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.265 [2024-11-20 16:19:44.816376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.265 qpair failed and we were unable to recover it. 
00:29:14.265 [2024-11-20 16:19:44.825998] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.265 [2024-11-20 16:19:44.826045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.265 [2024-11-20 16:19:44.826062] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.265 [2024-11-20 16:19:44.826071] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.265 [2024-11-20 16:19:44.826079] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.265 [2024-11-20 16:19:44.836466] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.265 qpair failed and we were unable to recover it. 00:29:14.265 [2024-11-20 16:19:44.845994] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.265 [2024-11-20 16:19:44.846035] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.265 [2024-11-20 16:19:44.846052] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.265 [2024-11-20 16:19:44.846061] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.265 [2024-11-20 16:19:44.846069] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.265 [2024-11-20 16:19:44.856572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.265 qpair failed and we were unable to recover it. 00:29:14.265 [2024-11-20 16:19:44.866184] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.265 [2024-11-20 16:19:44.866226] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.265 [2024-11-20 16:19:44.866242] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.265 [2024-11-20 16:19:44.866251] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.265 [2024-11-20 16:19:44.866260] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.265 [2024-11-20 16:19:44.876573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.265 qpair failed and we were unable to recover it. 
00:29:14.265 [2024-11-20 16:19:44.886126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.265 [2024-11-20 16:19:44.886167] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.265 [2024-11-20 16:19:44.886183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.265 [2024-11-20 16:19:44.886192] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.265 [2024-11-20 16:19:44.886201] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.265 [2024-11-20 16:19:44.896813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.265 qpair failed and we were unable to recover it. 00:29:14.265 [2024-11-20 16:19:44.906279] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.265 [2024-11-20 16:19:44.906321] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.265 [2024-11-20 16:19:44.906338] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.265 [2024-11-20 16:19:44.906348] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.265 [2024-11-20 16:19:44.906359] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.265 [2024-11-20 16:19:44.916774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.265 qpair failed and we were unable to recover it. 00:29:14.265 [2024-11-20 16:19:44.926267] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.265 [2024-11-20 16:19:44.926315] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.265 [2024-11-20 16:19:44.926331] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.265 [2024-11-20 16:19:44.926340] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.265 [2024-11-20 16:19:44.926348] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.265 [2024-11-20 16:19:44.936842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.265 qpair failed and we were unable to recover it. 
00:29:14.265 [2024-11-20 16:19:44.946397] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.265 [2024-11-20 16:19:44.946437] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.265 [2024-11-20 16:19:44.946454] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.265 [2024-11-20 16:19:44.946462] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.265 [2024-11-20 16:19:44.946471] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.265 [2024-11-20 16:19:44.956928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.265 qpair failed and we were unable to recover it. 00:29:14.265 [2024-11-20 16:19:44.966427] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.265 [2024-11-20 16:19:44.966468] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.265 [2024-11-20 16:19:44.966484] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.265 [2024-11-20 16:19:44.966493] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.265 [2024-11-20 16:19:44.966501] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.265 [2024-11-20 16:19:44.976789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.265 qpair failed and we were unable to recover it. 00:29:14.265 [2024-11-20 16:19:44.986417] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.265 [2024-11-20 16:19:44.986461] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.265 [2024-11-20 16:19:44.986476] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.265 [2024-11-20 16:19:44.986486] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.265 [2024-11-20 16:19:44.986494] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.265 [2024-11-20 16:19:44.996930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.265 qpair failed and we were unable to recover it. 
00:29:14.265 [2024-11-20 16:19:45.006530] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.265 [2024-11-20 16:19:45.006574] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.265 [2024-11-20 16:19:45.006591] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.265 [2024-11-20 16:19:45.006600] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.265 [2024-11-20 16:19:45.006610] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.265 [2024-11-20 16:19:45.017023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.265 qpair failed and we were unable to recover it. 00:29:14.265 [2024-11-20 16:19:45.026642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.266 [2024-11-20 16:19:45.026680] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.266 [2024-11-20 16:19:45.026698] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.266 [2024-11-20 16:19:45.026707] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.266 [2024-11-20 16:19:45.026716] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.266 [2024-11-20 16:19:45.037221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.266 qpair failed and we were unable to recover it. 00:29:14.266 [2024-11-20 16:19:45.046751] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.266 [2024-11-20 16:19:45.046793] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.266 [2024-11-20 16:19:45.046809] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.266 [2024-11-20 16:19:45.046819] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.266 [2024-11-20 16:19:45.046827] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.266 [2024-11-20 16:19:45.056975] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.266 qpair failed and we were unable to recover it. 
00:29:14.266 [2024-11-20 16:19:45.066460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.266 [2024-11-20 16:19:45.066506] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.266 [2024-11-20 16:19:45.066535] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.266 [2024-11-20 16:19:45.066545] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.266 [2024-11-20 16:19:45.066554] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.526 [2024-11-20 16:19:45.077247] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.526 qpair failed and we were unable to recover it. 00:29:14.526 [2024-11-20 16:19:45.086822] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.526 [2024-11-20 16:19:45.086868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.526 [2024-11-20 16:19:45.086889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.526 [2024-11-20 16:19:45.086898] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.526 [2024-11-20 16:19:45.086907] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.526 [2024-11-20 16:19:45.097420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.527 qpair failed and we were unable to recover it. 00:29:14.527 [2024-11-20 16:19:45.106832] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.527 [2024-11-20 16:19:45.106876] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.527 [2024-11-20 16:19:45.106892] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.527 [2024-11-20 16:19:45.106901] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.527 [2024-11-20 16:19:45.106910] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.527 [2024-11-20 16:19:45.117094] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.527 qpair failed and we were unable to recover it. 
00:29:14.527 [2024-11-20 16:19:45.126853] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.527 [2024-11-20 16:19:45.126892] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.527 [2024-11-20 16:19:45.126909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.527 [2024-11-20 16:19:45.126918] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.527 [2024-11-20 16:19:45.126926] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.527 [2024-11-20 16:19:45.137312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.527 qpair failed and we were unable to recover it. 00:29:14.527 [2024-11-20 16:19:45.146902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.527 [2024-11-20 16:19:45.146945] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.527 [2024-11-20 16:19:45.146964] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.527 [2024-11-20 16:19:45.146974] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.527 [2024-11-20 16:19:45.146982] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.527 [2024-11-20 16:19:45.157397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.527 qpair failed and we were unable to recover it. 00:29:14.527 [2024-11-20 16:19:45.167067] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.527 [2024-11-20 16:19:45.167103] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.527 [2024-11-20 16:19:45.167120] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.527 [2024-11-20 16:19:45.167129] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.527 [2024-11-20 16:19:45.167137] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.527 [2024-11-20 16:19:45.177418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.527 qpair failed and we were unable to recover it. 
00:29:14.527 [2024-11-20 16:19:45.187072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.527 [2024-11-20 16:19:45.187107] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.527 [2024-11-20 16:19:45.187124] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.527 [2024-11-20 16:19:45.187133] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.527 [2024-11-20 16:19:45.187141] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.527 [2024-11-20 16:19:45.197515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.527 qpair failed and we were unable to recover it. 00:29:14.527 [2024-11-20 16:19:45.207153] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.527 [2024-11-20 16:19:45.207194] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.527 [2024-11-20 16:19:45.207211] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.527 [2024-11-20 16:19:45.207220] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.527 [2024-11-20 16:19:45.207229] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.527 [2024-11-20 16:19:45.217595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.527 qpair failed and we were unable to recover it. 00:29:14.527 [2024-11-20 16:19:45.227139] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.527 [2024-11-20 16:19:45.227183] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.527 [2024-11-20 16:19:45.227200] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.527 [2024-11-20 16:19:45.227210] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.527 [2024-11-20 16:19:45.227219] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.527 [2024-11-20 16:19:45.237642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.527 qpair failed and we were unable to recover it. 
00:29:14.527 [2024-11-20 16:19:45.247215] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.527 [2024-11-20 16:19:45.247251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.527 [2024-11-20 16:19:45.247267] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.527 [2024-11-20 16:19:45.247277] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.527 [2024-11-20 16:19:45.247285] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.527 [2024-11-20 16:19:45.257676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.527 qpair failed and we were unable to recover it. 00:29:14.527 [2024-11-20 16:19:45.267233] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.527 [2024-11-20 16:19:45.267276] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.527 [2024-11-20 16:19:45.267296] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.527 [2024-11-20 16:19:45.267305] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.527 [2024-11-20 16:19:45.267314] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.527 [2024-11-20 16:19:45.277755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.527 qpair failed and we were unable to recover it. 00:29:14.527 [2024-11-20 16:19:45.287331] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.527 [2024-11-20 16:19:45.287369] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.527 [2024-11-20 16:19:45.287386] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.527 [2024-11-20 16:19:45.287395] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.527 [2024-11-20 16:19:45.287403] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.527 [2024-11-20 16:19:45.297629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.527 qpair failed and we were unable to recover it. 
00:29:14.527 [2024-11-20 16:19:45.307356] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.527 [2024-11-20 16:19:45.307400] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.527 [2024-11-20 16:19:45.307416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.527 [2024-11-20 16:19:45.307425] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.527 [2024-11-20 16:19:45.307433] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.527 [2024-11-20 16:19:45.317801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.527 qpair failed and we were unable to recover it. 00:29:14.528 [2024-11-20 16:19:45.327408] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.528 [2024-11-20 16:19:45.327454] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.528 [2024-11-20 16:19:45.327470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.528 [2024-11-20 16:19:45.327479] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.528 [2024-11-20 16:19:45.327488] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.789 [2024-11-20 16:19:45.337813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.789 qpair failed and we were unable to recover it. 00:29:14.789 [2024-11-20 16:19:45.347500] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.789 [2024-11-20 16:19:45.347538] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.789 [2024-11-20 16:19:45.347554] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.789 [2024-11-20 16:19:45.347563] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.789 [2024-11-20 16:19:45.347575] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.789 [2024-11-20 16:19:45.358040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.789 qpair failed and we were unable to recover it. 
00:29:14.789 [2024-11-20 16:19:45.367644] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.789 [2024-11-20 16:19:45.367686] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.789 [2024-11-20 16:19:45.367702] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.789 [2024-11-20 16:19:45.367711] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.789 [2024-11-20 16:19:45.367720] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.789 [2024-11-20 16:19:45.378034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.789 qpair failed and we were unable to recover it. 00:29:14.789 [2024-11-20 16:19:45.387632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.789 [2024-11-20 16:19:45.387674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.789 [2024-11-20 16:19:45.387691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.789 [2024-11-20 16:19:45.387701] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.789 [2024-11-20 16:19:45.387709] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.789 [2024-11-20 16:19:45.398237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.789 qpair failed and we were unable to recover it. 00:29:14.789 [2024-11-20 16:19:45.407642] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.789 [2024-11-20 16:19:45.407684] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.789 [2024-11-20 16:19:45.407700] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.790 [2024-11-20 16:19:45.407709] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.790 [2024-11-20 16:19:45.407718] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.790 [2024-11-20 16:19:45.418207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.790 qpair failed and we were unable to recover it. 
00:29:14.790 [2024-11-20 16:19:45.427632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.790 [2024-11-20 16:19:45.427669] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.790 [2024-11-20 16:19:45.427686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.790 [2024-11-20 16:19:45.427695] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.790 [2024-11-20 16:19:45.427703] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.790 [2024-11-20 16:19:45.437904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.790 qpair failed and we were unable to recover it. 00:29:14.790 [2024-11-20 16:19:45.447744] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.790 [2024-11-20 16:19:45.447784] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.790 [2024-11-20 16:19:45.447800] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.790 [2024-11-20 16:19:45.447810] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.790 [2024-11-20 16:19:45.447818] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.790 [2024-11-20 16:19:45.458192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.790 qpair failed and we were unable to recover it. 00:29:14.790 [2024-11-20 16:19:45.467821] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.790 [2024-11-20 16:19:45.467867] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.790 [2024-11-20 16:19:45.467884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.790 [2024-11-20 16:19:45.467893] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.790 [2024-11-20 16:19:45.467902] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.790 [2024-11-20 16:19:45.478319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.790 qpair failed and we were unable to recover it. 
00:29:14.790 [2024-11-20 16:19:45.487889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.790 [2024-11-20 16:19:45.487937] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.790 [2024-11-20 16:19:45.487953] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.790 [2024-11-20 16:19:45.487962] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.790 [2024-11-20 16:19:45.487971] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.790 [2024-11-20 16:19:45.498406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.790 qpair failed and we were unable to recover it. 00:29:14.790 [2024-11-20 16:19:45.508032] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.790 [2024-11-20 16:19:45.508072] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.790 [2024-11-20 16:19:45.508089] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.790 [2024-11-20 16:19:45.508099] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.790 [2024-11-20 16:19:45.508107] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.790 [2024-11-20 16:19:45.518152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.790 qpair failed and we were unable to recover it. 00:29:14.790 [2024-11-20 16:19:45.528010] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.790 [2024-11-20 16:19:45.528051] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.790 [2024-11-20 16:19:45.528068] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.790 [2024-11-20 16:19:45.528080] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.790 [2024-11-20 16:19:45.528089] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.790 [2024-11-20 16:19:45.538440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.790 qpair failed and we were unable to recover it. 
00:29:14.790 [2024-11-20 16:19:45.547971] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.790 [2024-11-20 16:19:45.548013] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.790 [2024-11-20 16:19:45.548029] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.790 [2024-11-20 16:19:45.548038] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.790 [2024-11-20 16:19:45.548047] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.790 [2024-11-20 16:19:45.558438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.790 qpair failed and we were unable to recover it. 00:29:14.790 [2024-11-20 16:19:45.568152] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.790 [2024-11-20 16:19:45.568187] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.790 [2024-11-20 16:19:45.568204] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.790 [2024-11-20 16:19:45.568213] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.790 [2024-11-20 16:19:45.568221] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:14.790 [2024-11-20 16:19:45.578630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:14.790 qpair failed and we were unable to recover it. 00:29:14.790 [2024-11-20 16:19:45.588178] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:14.790 [2024-11-20 16:19:45.588220] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:14.790 [2024-11-20 16:19:45.588237] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:14.790 [2024-11-20 16:19:45.588246] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:14.790 [2024-11-20 16:19:45.588255] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.052 [2024-11-20 16:19:45.598447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.052 qpair failed and we were unable to recover it. 
00:29:15.052 [2024-11-20 16:19:45.608286] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.052 [2024-11-20 16:19:45.608329] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.052 [2024-11-20 16:19:45.608345] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.052 [2024-11-20 16:19:45.608354] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.052 [2024-11-20 16:19:45.608363] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.052 [2024-11-20 16:19:45.618695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-11-20 16:19:45.628255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.052 [2024-11-20 16:19:45.628305] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.052 [2024-11-20 16:19:45.628322] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.052 [2024-11-20 16:19:45.628332] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.052 [2024-11-20 16:19:45.628341] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.052 [2024-11-20 16:19:45.638508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-11-20 16:19:45.648386] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.052 [2024-11-20 16:19:45.648425] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.052 [2024-11-20 16:19:45.648441] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.052 [2024-11-20 16:19:45.648450] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.052 [2024-11-20 16:19:45.648459] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.052 [2024-11-20 16:19:45.658572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.052 qpair failed and we were unable to recover it. 
00:29:15.052 [2024-11-20 16:19:45.668446] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.052 [2024-11-20 16:19:45.668482] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.052 [2024-11-20 16:19:45.668499] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.052 [2024-11-20 16:19:45.668508] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.052 [2024-11-20 16:19:45.668521] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.052 [2024-11-20 16:19:45.678837] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-11-20 16:19:45.688521] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.052 [2024-11-20 16:19:45.688561] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.052 [2024-11-20 16:19:45.688577] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.052 [2024-11-20 16:19:45.688586] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.052 [2024-11-20 16:19:45.688595] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.052 [2024-11-20 16:19:45.698828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.052 qpair failed and we were unable to recover it. 00:29:15.052 [2024-11-20 16:19:45.708396] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.053 [2024-11-20 16:19:45.708439] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.053 [2024-11-20 16:19:45.708458] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.053 [2024-11-20 16:19:45.708467] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.053 [2024-11-20 16:19:45.708476] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.053 [2024-11-20 16:19:45.718937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 [2024-11-20 16:19:45.728543] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.053 [2024-11-20 16:19:45.728584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.053 [2024-11-20 16:19:45.728600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.053 [2024-11-20 16:19:45.728609] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.053 [2024-11-20 16:19:45.728618] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.053 [2024-11-20 16:19:45.739011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-20 16:19:45.748698] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.053 [2024-11-20 16:19:45.748740] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.053 [2024-11-20 16:19:45.748757] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.053 [2024-11-20 16:19:45.748766] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.053 [2024-11-20 16:19:45.748774] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.053 [2024-11-20 16:19:45.758893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-20 16:19:45.768649] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.053 [2024-11-20 16:19:45.768688] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.053 [2024-11-20 16:19:45.768704] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.053 [2024-11-20 16:19:45.768713] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.053 [2024-11-20 16:19:45.768721] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.053 [2024-11-20 16:19:45.779208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 [2024-11-20 16:19:45.788778] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.053 [2024-11-20 16:19:45.788824] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.053 [2024-11-20 16:19:45.788840] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.053 [2024-11-20 16:19:45.788849] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.053 [2024-11-20 16:19:45.788861] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.053 [2024-11-20 16:19:45.799077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-20 16:19:45.808807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.053 [2024-11-20 16:19:45.808844] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.053 [2024-11-20 16:19:45.808861] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.053 [2024-11-20 16:19:45.808870] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.053 [2024-11-20 16:19:45.808878] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.053 [2024-11-20 16:19:45.819171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.053 qpair failed and we were unable to recover it. 00:29:15.053 [2024-11-20 16:19:45.828793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.053 [2024-11-20 16:19:45.828831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.053 [2024-11-20 16:19:45.828847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.053 [2024-11-20 16:19:45.828856] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.053 [2024-11-20 16:19:45.828865] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.053 [2024-11-20 16:19:45.839134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.053 qpair failed and we were unable to recover it. 
00:29:15.053 [2024-11-20 16:19:45.848977] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.053 [2024-11-20 16:19:45.849018] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.053 [2024-11-20 16:19:45.849034] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.053 [2024-11-20 16:19:45.849043] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.053 [2024-11-20 16:19:45.849052] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.315 [2024-11-20 16:19:45.859550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.315 qpair failed and we were unable to recover it. 00:29:15.315 [2024-11-20 16:19:45.869025] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-11-20 16:19:45.869071] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.315 [2024-11-20 16:19:45.869087] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.315 [2024-11-20 16:19:45.869097] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.315 [2024-11-20 16:19:45.869105] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.315 [2024-11-20 16:19:45.879312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.315 qpair failed and we were unable to recover it. 00:29:15.315 [2024-11-20 16:19:45.889166] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-11-20 16:19:45.889213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.315 [2024-11-20 16:19:45.889229] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.315 [2024-11-20 16:19:45.889238] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.315 [2024-11-20 16:19:45.889247] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.315 [2024-11-20 16:19:45.899425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.315 qpair failed and we were unable to recover it. 
00:29:15.315 [2024-11-20 16:19:45.909164] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-11-20 16:19:45.909208] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.315 [2024-11-20 16:19:45.909224] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.315 [2024-11-20 16:19:45.909232] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.315 [2024-11-20 16:19:45.909241] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.315 [2024-11-20 16:19:45.919307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.315 qpair failed and we were unable to recover it. 00:29:15.315 [2024-11-20 16:19:45.929284] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-11-20 16:19:45.929324] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.315 [2024-11-20 16:19:45.929341] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.315 [2024-11-20 16:19:45.929350] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.315 [2024-11-20 16:19:45.929359] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.315 [2024-11-20 16:19:45.939400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.315 qpair failed and we were unable to recover it. 00:29:15.315 [2024-11-20 16:19:45.949290] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-11-20 16:19:45.949335] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.315 [2024-11-20 16:19:45.949351] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.315 [2024-11-20 16:19:45.949360] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.315 [2024-11-20 16:19:45.949369] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.315 [2024-11-20 16:19:45.959518] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.315 qpair failed and we were unable to recover it. 
00:29:15.315 [2024-11-20 16:19:45.969416] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-11-20 16:19:45.969462] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.315 [2024-11-20 16:19:45.969478] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.315 [2024-11-20 16:19:45.969491] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.315 [2024-11-20 16:19:45.969500] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.315 [2024-11-20 16:19:45.979704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.315 qpair failed and we were unable to recover it. 00:29:15.315 [2024-11-20 16:19:45.989379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-11-20 16:19:45.989418] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.315 [2024-11-20 16:19:45.989434] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.315 [2024-11-20 16:19:45.989444] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.315 [2024-11-20 16:19:45.989452] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.315 [2024-11-20 16:19:45.999642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.315 qpair failed and we were unable to recover it. 00:29:15.315 [2024-11-20 16:19:46.009558] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-11-20 16:19:46.009598] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.315 [2024-11-20 16:19:46.009614] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.315 [2024-11-20 16:19:46.009623] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.315 [2024-11-20 16:19:46.009632] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.315 [2024-11-20 16:19:46.019703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.315 qpair failed and we were unable to recover it. 
00:29:15.315 [2024-11-20 16:19:46.029481] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-11-20 16:19:46.029534] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.315 [2024-11-20 16:19:46.029552] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.315 [2024-11-20 16:19:46.029561] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.315 [2024-11-20 16:19:46.029570] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.315 [2024-11-20 16:19:46.039855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.315 qpair failed and we were unable to recover it. 00:29:15.315 [2024-11-20 16:19:46.049690] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.315 [2024-11-20 16:19:46.049727] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.316 [2024-11-20 16:19:46.049744] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.316 [2024-11-20 16:19:46.049753] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.316 [2024-11-20 16:19:46.049761] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.316 [2024-11-20 16:19:46.060000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.316 qpair failed and we were unable to recover it. 00:29:15.316 [2024-11-20 16:19:46.069681] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.316 [2024-11-20 16:19:46.069720] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.316 [2024-11-20 16:19:46.069737] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.316 [2024-11-20 16:19:46.069746] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.316 [2024-11-20 16:19:46.069754] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.316 [2024-11-20 16:19:46.079905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.316 qpair failed and we were unable to recover it. 
00:29:15.316 [2024-11-20 16:19:46.089732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.316 [2024-11-20 16:19:46.089773] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.316 [2024-11-20 16:19:46.089789] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.316 [2024-11-20 16:19:46.089798] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.316 [2024-11-20 16:19:46.089807] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.316 [2024-11-20 16:19:46.100206] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.316 qpair failed and we were unable to recover it. 00:29:15.316 [2024-11-20 16:19:46.109793] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.316 [2024-11-20 16:19:46.109839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.316 [2024-11-20 16:19:46.109855] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.316 [2024-11-20 16:19:46.109864] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.316 [2024-11-20 16:19:46.109873] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.576 [2024-11-20 16:19:46.120179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.576 qpair failed and we were unable to recover it. 00:29:15.576 [2024-11-20 16:19:46.129826] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.576 [2024-11-20 16:19:46.129868] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.576 [2024-11-20 16:19:46.129884] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.576 [2024-11-20 16:19:46.129893] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.576 [2024-11-20 16:19:46.129901] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.576 [2024-11-20 16:19:46.140083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.576 qpair failed and we were unable to recover it. 
00:29:15.576 [2024-11-20 16:19:46.149784] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.576 [2024-11-20 16:19:46.149827] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.576 [2024-11-20 16:19:46.149847] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.576 [2024-11-20 16:19:46.149856] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.576 [2024-11-20 16:19:46.149865] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.576 [2024-11-20 16:19:46.160301] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.576 qpair failed and we were unable to recover it. 00:29:15.576 [2024-11-20 16:19:46.169954] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.576 [2024-11-20 16:19:46.169995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.576 [2024-11-20 16:19:46.170012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.576 [2024-11-20 16:19:46.170021] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.576 [2024-11-20 16:19:46.170030] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.576 [2024-11-20 16:19:46.180214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.576 qpair failed and we were unable to recover it. 00:29:15.576 [2024-11-20 16:19:46.190097] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.576 [2024-11-20 16:19:46.190137] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.576 [2024-11-20 16:19:46.190154] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.577 [2024-11-20 16:19:46.190163] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.577 [2024-11-20 16:19:46.190172] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.577 [2024-11-20 16:19:46.200336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.577 qpair failed and we were unable to recover it. 
00:29:15.577 [2024-11-20 16:19:46.210145] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.577 [2024-11-20 16:19:46.210189] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.577 [2024-11-20 16:19:46.210205] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.577 [2024-11-20 16:19:46.210214] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.577 [2024-11-20 16:19:46.210223] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.577 [2024-11-20 16:19:46.220497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.577 qpair failed and we were unable to recover it. 00:29:15.577 [2024-11-20 16:19:46.230072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.577 [2024-11-20 16:19:46.230111] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.577 [2024-11-20 16:19:46.230128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.577 [2024-11-20 16:19:46.230137] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.577 [2024-11-20 16:19:46.230145] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.577 [2024-11-20 16:19:46.240554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.577 qpair failed and we were unable to recover it. 00:29:15.577 [2024-11-20 16:19:46.250188] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.577 [2024-11-20 16:19:46.250228] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.577 [2024-11-20 16:19:46.250245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.577 [2024-11-20 16:19:46.250255] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.577 [2024-11-20 16:19:46.250263] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.577 [2024-11-20 16:19:46.260497] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.577 qpair failed and we were unable to recover it. 
00:29:15.577 [2024-11-20 16:19:46.270285] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.577 [2024-11-20 16:19:46.270327] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.577 [2024-11-20 16:19:46.270343] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.577 [2024-11-20 16:19:46.270353] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.577 [2024-11-20 16:19:46.270361] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.577 [2024-11-20 16:19:46.280620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.577 qpair failed and we were unable to recover it. 00:29:15.577 [2024-11-20 16:19:46.290447] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.577 [2024-11-20 16:19:46.290485] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.577 [2024-11-20 16:19:46.290501] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.577 [2024-11-20 16:19:46.290510] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.577 [2024-11-20 16:19:46.290524] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.577 [2024-11-20 16:19:46.300644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.577 qpair failed and we were unable to recover it. 00:29:15.577 [2024-11-20 16:19:46.310341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.577 [2024-11-20 16:19:46.310382] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.577 [2024-11-20 16:19:46.310398] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.577 [2024-11-20 16:19:46.310408] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.577 [2024-11-20 16:19:46.310416] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.577 [2024-11-20 16:19:46.320713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.577 qpair failed and we were unable to recover it. 
00:29:15.577 [2024-11-20 16:19:46.330440] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.577 [2024-11-20 16:19:46.330481] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.577 [2024-11-20 16:19:46.330497] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.577 [2024-11-20 16:19:46.330507] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.577 [2024-11-20 16:19:46.330515] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.577 [2024-11-20 16:19:46.340647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.577 qpair failed and we were unable to recover it. 00:29:15.577 [2024-11-20 16:19:46.350395] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.577 [2024-11-20 16:19:46.350436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.577 [2024-11-20 16:19:46.350453] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.577 [2024-11-20 16:19:46.350462] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.577 [2024-11-20 16:19:46.350471] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.577 [2024-11-20 16:19:46.360828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.577 qpair failed and we were unable to recover it. 00:29:15.577 [2024-11-20 16:19:46.370587] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.577 [2024-11-20 16:19:46.370630] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.577 [2024-11-20 16:19:46.370647] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.577 [2024-11-20 16:19:46.370656] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.577 [2024-11-20 16:19:46.370664] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.838 [2024-11-20 16:19:46.380943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.838 qpair failed and we were unable to recover it. 
00:29:15.838 [2024-11-20 16:19:46.390564] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.838 [2024-11-20 16:19:46.390605] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.838 [2024-11-20 16:19:46.390622] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.838 [2024-11-20 16:19:46.390631] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.838 [2024-11-20 16:19:46.390640] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.838 [2024-11-20 16:19:46.400834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-11-20 16:19:46.410592] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.838 [2024-11-20 16:19:46.410634] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.838 [2024-11-20 16:19:46.410650] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.838 [2024-11-20 16:19:46.410662] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.838 [2024-11-20 16:19:46.410671] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.838 [2024-11-20 16:19:46.421090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-11-20 16:19:46.430613] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.838 [2024-11-20 16:19:46.430657] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.838 [2024-11-20 16:19:46.430673] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.838 [2024-11-20 16:19:46.430682] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.838 [2024-11-20 16:19:46.430690] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.838 [2024-11-20 16:19:46.441067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.838 qpair failed and we were unable to recover it. 
00:29:15.838 [2024-11-20 16:19:46.450803] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.838 [2024-11-20 16:19:46.450839] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.838 [2024-11-20 16:19:46.450855] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.838 [2024-11-20 16:19:46.450864] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.838 [2024-11-20 16:19:46.450873] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.838 [2024-11-20 16:19:46.461219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-11-20 16:19:46.470874] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.838 [2024-11-20 16:19:46.470920] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.838 [2024-11-20 16:19:46.470936] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.838 [2024-11-20 16:19:46.470946] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.838 [2024-11-20 16:19:46.470954] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.838 [2024-11-20 16:19:46.481096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-11-20 16:19:46.490889] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.838 [2024-11-20 16:19:46.490929] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.838 [2024-11-20 16:19:46.490945] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.838 [2024-11-20 16:19:46.490954] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.838 [2024-11-20 16:19:46.490963] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.838 [2024-11-20 16:19:46.501190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.838 qpair failed and we were unable to recover it. 
00:29:15.838 [2024-11-20 16:19:46.510973] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.838 [2024-11-20 16:19:46.511019] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.838 [2024-11-20 16:19:46.511035] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.838 [2024-11-20 16:19:46.511044] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.838 [2024-11-20 16:19:46.511053] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.838 [2024-11-20 16:19:46.521269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-11-20 16:19:46.531072] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.838 [2024-11-20 16:19:46.531117] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.838 [2024-11-20 16:19:46.531133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.838 [2024-11-20 16:19:46.531142] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.838 [2024-11-20 16:19:46.531151] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.838 [2024-11-20 16:19:46.541459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.838 qpair failed and we were unable to recover it. 00:29:15.838 [2024-11-20 16:19:46.551047] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.838 [2024-11-20 16:19:46.551084] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.838 [2024-11-20 16:19:46.551100] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.838 [2024-11-20 16:19:46.551109] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.838 [2024-11-20 16:19:46.551117] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.838 [2024-11-20 16:19:46.561391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.838 qpair failed and we were unable to recover it. 
00:29:15.839 [2024-11-20 16:19:46.571134] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.839 [2024-11-20 16:19:46.571173] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.839 [2024-11-20 16:19:46.571190] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.839 [2024-11-20 16:19:46.571199] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.839 [2024-11-20 16:19:46.571207] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.839 [2024-11-20 16:19:46.581457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-11-20 16:19:46.591207] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.839 [2024-11-20 16:19:46.591251] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.839 [2024-11-20 16:19:46.591270] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.839 [2024-11-20 16:19:46.591280] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.839 [2024-11-20 16:19:46.591288] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.839 [2024-11-20 16:19:46.601669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.839 qpair failed and we were unable to recover it. 00:29:15.839 [2024-11-20 16:19:46.611312] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.839 [2024-11-20 16:19:46.611353] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.839 [2024-11-20 16:19:46.611370] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.839 [2024-11-20 16:19:46.611379] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.839 [2024-11-20 16:19:46.611387] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:15.839 [2024-11-20 16:19:46.621619] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:15.839 qpair failed and we were unable to recover it. 
00:29:15.839 [2024-11-20 16:19:46.631269] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:15.839 [2024-11-20 16:19:46.631310] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:15.839 [2024-11-20 16:19:46.631326] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:15.839 [2024-11-20 16:19:46.631335] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:15.839 [2024-11-20 16:19:46.631344] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.100 [2024-11-20 16:19:46.641629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.100 qpair failed and we were unable to recover it. 00:29:16.100 [2024-11-20 16:19:46.651290] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.100 [2024-11-20 16:19:46.651330] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.100 [2024-11-20 16:19:46.651346] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.100 [2024-11-20 16:19:46.651356] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.100 [2024-11-20 16:19:46.651364] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.100 [2024-11-20 16:19:46.661591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.100 qpair failed and we were unable to recover it. 00:29:16.100 [2024-11-20 16:19:46.671414] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.100 [2024-11-20 16:19:46.671457] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.100 [2024-11-20 16:19:46.671473] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.100 [2024-11-20 16:19:46.671482] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.100 [2024-11-20 16:19:46.671490] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.100 [2024-11-20 16:19:46.681770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.100 qpair failed and we were unable to recover it. 
00:29:16.100 [2024-11-20 16:19:46.691379] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.100 [2024-11-20 16:19:46.691422] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.100 [2024-11-20 16:19:46.691438] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.100 [2024-11-20 16:19:46.691447] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.100 [2024-11-20 16:19:46.691456] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.100 [2024-11-20 16:19:46.701674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.100 qpair failed and we were unable to recover it. 00:29:16.100 [2024-11-20 16:19:46.711571] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.100 [2024-11-20 16:19:46.711606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.100 [2024-11-20 16:19:46.711623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.100 [2024-11-20 16:19:46.711632] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.100 [2024-11-20 16:19:46.711641] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.100 [2024-11-20 16:19:46.722000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.100 qpair failed and we were unable to recover it. 00:29:16.100 [2024-11-20 16:19:46.731536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.100 [2024-11-20 16:19:46.731586] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.100 [2024-11-20 16:19:46.731603] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.100 [2024-11-20 16:19:46.731612] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.100 [2024-11-20 16:19:46.731621] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.100 [2024-11-20 16:19:46.742057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.100 qpair failed and we were unable to recover it. 
00:29:16.100 [2024-11-20 16:19:46.751568] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.100 [2024-11-20 16:19:46.751606] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.100 [2024-11-20 16:19:46.751623] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.100 [2024-11-20 16:19:46.751632] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.100 [2024-11-20 16:19:46.751641] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.100 [2024-11-20 16:19:46.761964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.100 qpair failed and we were unable to recover it. 00:29:16.100 [2024-11-20 16:19:46.771755] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.100 [2024-11-20 16:19:46.771798] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.100 [2024-11-20 16:19:46.771815] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.100 [2024-11-20 16:19:46.771824] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.100 [2024-11-20 16:19:46.771833] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.100 [2024-11-20 16:19:46.782122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.100 qpair failed and we were unable to recover it. 00:29:16.100 [2024-11-20 16:19:46.791724] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.100 [2024-11-20 16:19:46.791766] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.100 [2024-11-20 16:19:46.791782] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.100 [2024-11-20 16:19:46.791791] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.100 [2024-11-20 16:19:46.791800] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.100 [2024-11-20 16:19:46.802082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.100 qpair failed and we were unable to recover it. 
00:29:16.100 [2024-11-20 16:19:46.811819] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.100 [2024-11-20 16:19:46.811863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.100 [2024-11-20 16:19:46.811879] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.100 [2024-11-20 16:19:46.811888] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.100 [2024-11-20 16:19:46.811896] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.100 [2024-11-20 16:19:46.822326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.100 qpair failed and we were unable to recover it. 00:29:16.100 [2024-11-20 16:19:46.831939] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.100 [2024-11-20 16:19:46.831978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.100 [2024-11-20 16:19:46.831995] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.100 [2024-11-20 16:19:46.832004] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.100 [2024-11-20 16:19:46.832012] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.100 [2024-11-20 16:19:46.842354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.100 qpair failed and we were unable to recover it. 00:29:16.100 [2024-11-20 16:19:46.852017] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.100 [2024-11-20 16:19:46.852057] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.100 [2024-11-20 16:19:46.852073] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.100 [2024-11-20 16:19:46.852082] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.100 [2024-11-20 16:19:46.852095] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.100 [2024-11-20 16:19:46.862325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.100 qpair failed and we were unable to recover it. 
00:29:16.100 [2024-11-20 16:19:46.872021] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.100 [2024-11-20 16:19:46.872060] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.100 [2024-11-20 16:19:46.872077] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.100 [2024-11-20 16:19:46.872086] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.100 [2024-11-20 16:19:46.872095] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.100 [2024-11-20 16:19:46.882436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.100 qpair failed and we were unable to recover it. 00:29:16.100 [2024-11-20 16:19:46.892093] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.100 [2024-11-20 16:19:46.892135] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.100 [2024-11-20 16:19:46.892151] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.100 [2024-11-20 16:19:46.892160] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.100 [2024-11-20 16:19:46.892169] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.100 [2024-11-20 16:19:46.902561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.100 qpair failed and we were unable to recover it. 00:29:16.361 [2024-11-20 16:19:46.912113] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.361 [2024-11-20 16:19:46.912156] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.361 [2024-11-20 16:19:46.912172] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.361 [2024-11-20 16:19:46.912182] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.361 [2024-11-20 16:19:46.912190] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.361 [2024-11-20 16:19:46.922574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.361 qpair failed and we were unable to recover it. 
00:29:16.361 [2024-11-20 16:19:46.932126] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.361 [2024-11-20 16:19:46.932166] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.361 [2024-11-20 16:19:46.932183] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.361 [2024-11-20 16:19:46.932192] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.361 [2024-11-20 16:19:46.932201] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.361 [2024-11-20 16:19:46.942681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.361 qpair failed and we were unable to recover it. 00:29:16.361 [2024-11-20 16:19:46.952173] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.361 [2024-11-20 16:19:46.952214] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.361 [2024-11-20 16:19:46.952232] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.361 [2024-11-20 16:19:46.952241] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.361 [2024-11-20 16:19:46.952249] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.361 [2024-11-20 16:19:46.962727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.361 qpair failed and we were unable to recover it. 00:29:16.361 [2024-11-20 16:19:46.972285] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.361 [2024-11-20 16:19:46.972326] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.361 [2024-11-20 16:19:46.972342] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.361 [2024-11-20 16:19:46.972351] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.361 [2024-11-20 16:19:46.972360] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.361 [2024-11-20 16:19:46.982672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.361 qpair failed and we were unable to recover it. 
00:29:16.361 [2024-11-20 16:19:46.992337] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.361 [2024-11-20 16:19:46.992380] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.361 [2024-11-20 16:19:46.992397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.361 [2024-11-20 16:19:46.992406] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.361 [2024-11-20 16:19:46.992415] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.361 [2024-11-20 16:19:47.002881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.361 qpair failed and we were unable to recover it. 00:29:16.361 [2024-11-20 16:19:47.012489] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.361 [2024-11-20 16:19:47.012528] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.361 [2024-11-20 16:19:47.012544] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.361 [2024-11-20 16:19:47.012553] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.361 [2024-11-20 16:19:47.012562] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.361 [2024-11-20 16:19:47.022919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.361 qpair failed and we were unable to recover it. 00:29:16.361 [2024-11-20 16:19:47.032424] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.361 [2024-11-20 16:19:47.032460] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.361 [2024-11-20 16:19:47.032479] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.361 [2024-11-20 16:19:47.032488] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.361 [2024-11-20 16:19:47.032497] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.361 [2024-11-20 16:19:47.043007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.361 qpair failed and we were unable to recover it. 
00:29:16.361 [2024-11-20 16:19:47.052576] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.361 [2024-11-20 16:19:47.052618] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.361 [2024-11-20 16:19:47.052634] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.361 [2024-11-20 16:19:47.052644] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.361 [2024-11-20 16:19:47.052652] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.361 [2024-11-20 16:19:47.063043] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.361 qpair failed and we were unable to recover it. 00:29:16.361 [2024-11-20 16:19:47.072559] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.361 [2024-11-20 16:19:47.072602] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.361 [2024-11-20 16:19:47.072618] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.361 [2024-11-20 16:19:47.072634] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.361 [2024-11-20 16:19:47.072643] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.361 [2024-11-20 16:19:47.083161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.361 qpair failed and we were unable to recover it. 00:29:16.361 [2024-11-20 16:19:47.092694] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.361 [2024-11-20 16:19:47.092739] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.361 [2024-11-20 16:19:47.092756] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.361 [2024-11-20 16:19:47.092765] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.361 [2024-11-20 16:19:47.092773] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.361 [2024-11-20 16:19:47.103028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.361 qpair failed and we were unable to recover it. 
00:29:16.361 [2024-11-20 16:19:47.112732] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.361 [2024-11-20 16:19:47.112769] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.361 [2024-11-20 16:19:47.112787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.361 [2024-11-20 16:19:47.112796] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.361 [2024-11-20 16:19:47.112804] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.361 [2024-11-20 16:19:47.123185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.361 qpair failed and we were unable to recover it. 00:29:16.361 [2024-11-20 16:19:47.132729] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.361 [2024-11-20 16:19:47.132770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.361 [2024-11-20 16:19:47.132787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.361 [2024-11-20 16:19:47.132796] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.361 [2024-11-20 16:19:47.132805] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.361 [2024-11-20 16:19:47.143291] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.361 qpair failed and we were unable to recover it. 00:29:16.361 [2024-11-20 16:19:47.152917] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.361 [2024-11-20 16:19:47.152956] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.361 [2024-11-20 16:19:47.152973] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.361 [2024-11-20 16:19:47.152982] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.361 [2024-11-20 16:19:47.152991] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.361 [2024-11-20 16:19:47.163307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.361 qpair failed and we were unable to recover it. 
00:29:16.623 [2024-11-20 16:19:47.173014] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.623 [2024-11-20 16:19:47.173064] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.623 [2024-11-20 16:19:47.173081] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.623 [2024-11-20 16:19:47.173090] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.623 [2024-11-20 16:19:47.173098] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.623 [2024-11-20 16:19:47.183246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.623 qpair failed and we were unable to recover it. 00:29:16.623 [2024-11-20 16:19:47.192966] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.623 [2024-11-20 16:19:47.193006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.623 [2024-11-20 16:19:47.193022] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.623 [2024-11-20 16:19:47.193032] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.623 [2024-11-20 16:19:47.193040] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.623 [2024-11-20 16:19:47.203468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.623 qpair failed and we were unable to recover it. 00:29:16.623 [2024-11-20 16:19:47.213023] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.623 [2024-11-20 16:19:47.213062] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.623 [2024-11-20 16:19:47.213082] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.624 [2024-11-20 16:19:47.213091] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.624 [2024-11-20 16:19:47.213099] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.624 [2024-11-20 16:19:47.223462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.624 qpair failed and we were unable to recover it. 
00:29:16.624 [2024-11-20 16:19:47.233084] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.624 [2024-11-20 16:19:47.233125] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.624 [2024-11-20 16:19:47.233142] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.624 [2024-11-20 16:19:47.233151] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.624 [2024-11-20 16:19:47.233160] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.624 [2024-11-20 16:19:47.243607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.624 qpair failed and we were unable to recover it. 00:29:16.624 [2024-11-20 16:19:47.253132] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.624 [2024-11-20 16:19:47.253170] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.624 [2024-11-20 16:19:47.253187] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.624 [2024-11-20 16:19:47.253196] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.624 [2024-11-20 16:19:47.253204] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.624 [2024-11-20 16:19:47.263390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.624 qpair failed and we were unable to recover it. 00:29:16.624 [2024-11-20 16:19:47.273255] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.624 [2024-11-20 16:19:47.273299] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.624 [2024-11-20 16:19:47.273315] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.624 [2024-11-20 16:19:47.273324] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.624 [2024-11-20 16:19:47.273332] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.624 [2024-11-20 16:19:47.283792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.624 qpair failed and we were unable to recover it. 
00:29:16.624 [2024-11-20 16:19:47.293242] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.624 [2024-11-20 16:19:47.293281] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.624 [2024-11-20 16:19:47.293297] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.624 [2024-11-20 16:19:47.293306] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.624 [2024-11-20 16:19:47.293318] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.624 [2024-11-20 16:19:47.303717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.624 qpair failed and we were unable to recover it. 00:29:16.624 [2024-11-20 16:19:47.313324] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.624 [2024-11-20 16:19:47.313366] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.624 [2024-11-20 16:19:47.313383] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.624 [2024-11-20 16:19:47.313391] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.624 [2024-11-20 16:19:47.313400] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.624 [2024-11-20 16:19:47.323824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.624 qpair failed and we were unable to recover it. 00:29:16.624 [2024-11-20 16:19:47.333399] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.624 [2024-11-20 16:19:47.333436] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.624 [2024-11-20 16:19:47.333453] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.624 [2024-11-20 16:19:47.333463] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.624 [2024-11-20 16:19:47.333472] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.624 [2024-11-20 16:19:47.343830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.624 qpair failed and we were unable to recover it. 
00:29:16.624 [2024-11-20 16:19:47.353474] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.624 [2024-11-20 16:19:47.353524] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.624 [2024-11-20 16:19:47.353542] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.624 [2024-11-20 16:19:47.353552] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.624 [2024-11-20 16:19:47.353561] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.624 [2024-11-20 16:19:47.363869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.624 qpair failed and we were unable to recover it. 00:29:16.624 [2024-11-20 16:19:47.373531] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.624 [2024-11-20 16:19:47.373572] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.624 [2024-11-20 16:19:47.373588] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.624 [2024-11-20 16:19:47.373597] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.624 [2024-11-20 16:19:47.373606] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.624 [2024-11-20 16:19:47.384018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.624 qpair failed and we were unable to recover it. 00:29:16.624 [2024-11-20 16:19:47.393567] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.624 [2024-11-20 16:19:47.393610] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.624 [2024-11-20 16:19:47.393627] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.624 [2024-11-20 16:19:47.393637] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.624 [2024-11-20 16:19:47.393645] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.624 [2024-11-20 16:19:47.403977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.624 qpair failed and we were unable to recover it. 
00:29:16.624 [2024-11-20 16:19:47.413615] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.624 [2024-11-20 16:19:47.413660] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.624 [2024-11-20 16:19:47.413677] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.624 [2024-11-20 16:19:47.413686] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.624 [2024-11-20 16:19:47.413695] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.624 [2024-11-20 16:19:47.424104] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.624 qpair failed and we were unable to recover it. 00:29:16.885 [2024-11-20 16:19:47.433665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.885 [2024-11-20 16:19:47.433706] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.885 [2024-11-20 16:19:47.433723] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.885 [2024-11-20 16:19:47.433733] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.885 [2024-11-20 16:19:47.433742] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.885 [2024-11-20 16:19:47.444170] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.885 qpair failed and we were unable to recover it. 00:29:16.885 [2024-11-20 16:19:47.453833] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.885 [2024-11-20 16:19:47.453873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.885 [2024-11-20 16:19:47.453890] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.885 [2024-11-20 16:19:47.453899] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.885 [2024-11-20 16:19:47.453908] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.885 [2024-11-20 16:19:47.464280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.885 qpair failed and we were unable to recover it. 
00:29:16.885 [2024-11-20 16:19:47.473936] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.885 [2024-11-20 16:19:47.473978] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.885 [2024-11-20 16:19:47.473995] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.885 [2024-11-20 16:19:47.474008] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.885 [2024-11-20 16:19:47.474017] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.885 [2024-11-20 16:19:47.484346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.885 qpair failed and we were unable to recover it. 00:29:16.885 [2024-11-20 16:19:47.493951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.885 [2024-11-20 16:19:47.493987] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.885 [2024-11-20 16:19:47.494003] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.886 [2024-11-20 16:19:47.494012] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.886 [2024-11-20 16:19:47.494021] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.886 [2024-11-20 16:19:47.504227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.886 qpair failed and we were unable to recover it. 00:29:16.886 [2024-11-20 16:19:47.513990] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.886 [2024-11-20 16:19:47.514027] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.886 [2024-11-20 16:19:47.514043] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.886 [2024-11-20 16:19:47.514053] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.886 [2024-11-20 16:19:47.514061] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.886 [2024-11-20 16:19:47.524526] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.886 qpair failed and we were unable to recover it. 
00:29:16.886 [2024-11-20 16:19:47.534034] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.886 [2024-11-20 16:19:47.534076] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.886 [2024-11-20 16:19:47.534092] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.886 [2024-11-20 16:19:47.534101] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.886 [2024-11-20 16:19:47.534110] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.886 [2024-11-20 16:19:47.544413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.886 qpair failed and we were unable to recover it. 00:29:16.886 [2024-11-20 16:19:47.554174] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.886 [2024-11-20 16:19:47.554212] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.886 [2024-11-20 16:19:47.554228] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.886 [2024-11-20 16:19:47.554237] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.886 [2024-11-20 16:19:47.554245] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.886 [2024-11-20 16:19:47.564561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.886 qpair failed and we were unable to recover it. 00:29:16.886 [2024-11-20 16:19:47.574220] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.886 [2024-11-20 16:19:47.574262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.886 [2024-11-20 16:19:47.574279] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.886 [2024-11-20 16:19:47.574288] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.886 [2024-11-20 16:19:47.574296] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.886 [2024-11-20 16:19:47.584700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.886 qpair failed and we were unable to recover it. 
00:29:16.886 [2024-11-20 16:19:47.594277] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.886 [2024-11-20 16:19:47.594313] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.886 [2024-11-20 16:19:47.594329] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.886 [2024-11-20 16:19:47.594338] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.886 [2024-11-20 16:19:47.594347] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.886 [2024-11-20 16:19:47.604744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.886 qpair failed and we were unable to recover it. 00:29:16.886 [2024-11-20 16:19:47.614308] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.886 [2024-11-20 16:19:47.614347] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.886 [2024-11-20 16:19:47.614364] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.886 [2024-11-20 16:19:47.614373] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.886 [2024-11-20 16:19:47.614382] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.886 [2024-11-20 16:19:47.624723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.886 qpair failed and we were unable to recover it. 00:29:16.886 [2024-11-20 16:19:47.634352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.886 [2024-11-20 16:19:47.634393] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.886 [2024-11-20 16:19:47.634409] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.886 [2024-11-20 16:19:47.634418] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.886 [2024-11-20 16:19:47.634427] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.886 [2024-11-20 16:19:47.644690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.886 qpair failed and we were unable to recover it. 
00:29:16.886 [2024-11-20 16:19:47.654411] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.886 [2024-11-20 16:19:47.654451] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.886 [2024-11-20 16:19:47.654470] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.886 [2024-11-20 16:19:47.654480] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.886 [2024-11-20 16:19:47.654489] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.886 [2024-11-20 16:19:47.664858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.886 qpair failed and we were unable to recover it. 00:29:16.886 [2024-11-20 16:19:47.674513] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:16.886 [2024-11-20 16:19:47.674555] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:16.886 [2024-11-20 16:19:47.674572] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:16.886 [2024-11-20 16:19:47.674581] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:16.886 [2024-11-20 16:19:47.674590] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:16.886 [2024-11-20 16:19:47.685174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:16.886 qpair failed and we were unable to recover it. 00:29:17.146 [2024-11-20 16:19:47.694522] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.146 [2024-11-20 16:19:47.694562] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.146 [2024-11-20 16:19:47.694579] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.146 [2024-11-20 16:19:47.694589] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.146 [2024-11-20 16:19:47.694598] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.146 [2024-11-20 16:19:47.704831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.146 qpair failed and we were unable to recover it. 
00:29:17.146 [2024-11-20 16:19:47.714610] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.146 [2024-11-20 16:19:47.714651] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.146 [2024-11-20 16:19:47.714668] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.146 [2024-11-20 16:19:47.714677] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.146 [2024-11-20 16:19:47.714685] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.146 [2024-11-20 16:19:47.724970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.146 qpair failed and we were unable to recover it. 00:29:17.146 [2024-11-20 16:19:47.734804] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.146 [2024-11-20 16:19:47.734841] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.146 [2024-11-20 16:19:47.734858] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.146 [2024-11-20 16:19:47.734867] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.146 [2024-11-20 16:19:47.734879] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.146 [2024-11-20 16:19:47.745196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.146 qpair failed and we were unable to recover it. 00:29:17.146 [2024-11-20 16:19:47.754726] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.146 [2024-11-20 16:19:47.754764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.146 [2024-11-20 16:19:47.754781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.146 [2024-11-20 16:19:47.754790] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.146 [2024-11-20 16:19:47.754798] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.146 [2024-11-20 16:19:47.765009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.146 qpair failed and we were unable to recover it. 
00:29:17.146 [2024-11-20 16:19:47.774834] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.146 [2024-11-20 16:19:47.774873] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.146 [2024-11-20 16:19:47.774889] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.146 [2024-11-20 16:19:47.774898] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.146 [2024-11-20 16:19:47.774907] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.146 [2024-11-20 16:19:47.785110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.146 qpair failed and we were unable to recover it. 00:29:17.146 [2024-11-20 16:19:47.794820] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.147 [2024-11-20 16:19:47.794863] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.147 [2024-11-20 16:19:47.794879] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.147 [2024-11-20 16:19:47.794888] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.147 [2024-11-20 16:19:47.794896] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.147 [2024-11-20 16:19:47.805239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.147 qpair failed and we were unable to recover it. 00:29:17.147 [2024-11-20 16:19:47.814962] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.147 [2024-11-20 16:19:47.815006] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.147 [2024-11-20 16:19:47.815022] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.147 [2024-11-20 16:19:47.815031] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.147 [2024-11-20 16:19:47.815040] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.147 [2024-11-20 16:19:47.825341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.147 qpair failed and we were unable to recover it. 
00:29:17.147 [2024-11-20 16:19:47.834987] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.147 [2024-11-20 16:19:47.835031] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.147 [2024-11-20 16:19:47.835047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.147 [2024-11-20 16:19:47.835056] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.147 [2024-11-20 16:19:47.835065] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.147 [2024-11-20 16:19:47.845382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.147 qpair failed and we were unable to recover it. 00:29:17.147 [2024-11-20 16:19:47.855079] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.147 [2024-11-20 16:19:47.855119] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.147 [2024-11-20 16:19:47.855136] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.147 [2024-11-20 16:19:47.855145] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.147 [2024-11-20 16:19:47.855154] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.147 [2024-11-20 16:19:47.865520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.147 qpair failed and we were unable to recover it. 00:29:17.147 [2024-11-20 16:19:47.875054] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.147 [2024-11-20 16:19:47.875095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.147 [2024-11-20 16:19:47.875112] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.147 [2024-11-20 16:19:47.875121] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.147 [2024-11-20 16:19:47.875129] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.147 [2024-11-20 16:19:47.885550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.147 qpair failed and we were unable to recover it. 
00:29:17.147 [2024-11-20 16:19:47.895185] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.147 [2024-11-20 16:19:47.895229] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.147 [2024-11-20 16:19:47.895245] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.147 [2024-11-20 16:19:47.895254] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.147 [2024-11-20 16:19:47.895263] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.147 [2024-11-20 16:19:47.905564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.147 qpair failed and we were unable to recover it. 00:29:17.147 [2024-11-20 16:19:47.915098] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.147 [2024-11-20 16:19:47.915140] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.147 [2024-11-20 16:19:47.915157] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.147 [2024-11-20 16:19:47.915169] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.147 [2024-11-20 16:19:47.915178] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.147 [2024-11-20 16:19:47.925481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.147 qpair failed and we were unable to recover it. 00:29:17.147 [2024-11-20 16:19:47.935423] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.147 [2024-11-20 16:19:47.935464] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.147 [2024-11-20 16:19:47.935481] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.147 [2024-11-20 16:19:47.935490] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.147 [2024-11-20 16:19:47.935498] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.147 [2024-11-20 16:19:47.945578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.147 qpair failed and we were unable to recover it. 
00:29:17.406 [2024-11-20 16:19:47.955341] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.406 [2024-11-20 16:19:47.955386] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.406 [2024-11-20 16:19:47.955402] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.406 [2024-11-20 16:19:47.955412] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.406 [2024-11-20 16:19:47.955420] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.406 [2024-11-20 16:19:47.965736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.406 qpair failed and we were unable to recover it. 00:29:17.406 [2024-11-20 16:19:47.975336] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.406 [2024-11-20 16:19:47.975373] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.406 [2024-11-20 16:19:47.975389] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.406 [2024-11-20 16:19:47.975398] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.406 [2024-11-20 16:19:47.975407] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.406 [2024-11-20 16:19:47.985808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.406 qpair failed and we were unable to recover it. 00:29:17.406 [2024-11-20 16:19:47.995352] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.406 [2024-11-20 16:19:47.995396] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.406 [2024-11-20 16:19:47.995416] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.406 [2024-11-20 16:19:47.995425] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.406 [2024-11-20 16:19:47.995434] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.406 [2024-11-20 16:19:48.005648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.406 qpair failed and we were unable to recover it. 
00:29:17.406 [2024-11-20 16:19:48.015485] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.406 [2024-11-20 16:19:48.015536] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.406 [2024-11-20 16:19:48.015556] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.406 [2024-11-20 16:19:48.015565] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.406 [2024-11-20 16:19:48.015574] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.406 [2024-11-20 16:19:48.025941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.406 qpair failed and we were unable to recover it. 00:29:17.406 [2024-11-20 16:19:48.035536] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.406 [2024-11-20 16:19:48.035579] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.406 [2024-11-20 16:19:48.035596] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.406 [2024-11-20 16:19:48.035606] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.406 [2024-11-20 16:19:48.035614] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.406 [2024-11-20 16:19:48.045829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.406 qpair failed and we were unable to recover it. 00:29:17.406 [2024-11-20 16:19:48.055657] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.406 [2024-11-20 16:19:48.055698] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.406 [2024-11-20 16:19:48.055715] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.406 [2024-11-20 16:19:48.055724] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.406 [2024-11-20 16:19:48.055732] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.406 [2024-11-20 16:19:48.066002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.406 qpair failed and we were unable to recover it. 
00:29:17.406 [2024-11-20 16:19:48.075635] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.406 [2024-11-20 16:19:48.075670] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.406 [2024-11-20 16:19:48.075686] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.406 [2024-11-20 16:19:48.075695] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.406 [2024-11-20 16:19:48.075703] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.406 [2024-11-20 16:19:48.086282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.406 qpair failed and we were unable to recover it. 00:29:17.406 [2024-11-20 16:19:48.095707] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.406 [2024-11-20 16:19:48.095747] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.406 [2024-11-20 16:19:48.095766] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.406 [2024-11-20 16:19:48.095775] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.406 [2024-11-20 16:19:48.095783] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.406 [2024-11-20 16:19:48.106236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.406 qpair failed and we were unable to recover it. 00:29:17.406 [2024-11-20 16:19:48.115704] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.406 [2024-11-20 16:19:48.115751] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.406 [2024-11-20 16:19:48.115767] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.406 [2024-11-20 16:19:48.115776] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.406 [2024-11-20 16:19:48.115785] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.407 [2024-11-20 16:19:48.126173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.407 qpair failed and we were unable to recover it. 
00:29:17.407 [2024-11-20 16:19:48.135905] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.407 [2024-11-20 16:19:48.135941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.407 [2024-11-20 16:19:48.135957] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.407 [2024-11-20 16:19:48.135966] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.407 [2024-11-20 16:19:48.135975] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.407 [2024-11-20 16:19:48.146318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.407 qpair failed and we were unable to recover it. 00:29:17.407 [2024-11-20 16:19:48.155867] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.407 [2024-11-20 16:19:48.155909] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.407 [2024-11-20 16:19:48.155926] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.407 [2024-11-20 16:19:48.155936] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.407 [2024-11-20 16:19:48.155944] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.407 [2024-11-20 16:19:48.166193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.407 qpair failed and we were unable to recover it. 00:29:17.407 [2024-11-20 16:19:48.175928] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.407 [2024-11-20 16:19:48.175970] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.407 [2024-11-20 16:19:48.175986] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.407 [2024-11-20 16:19:48.175995] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.407 [2024-11-20 16:19:48.176004] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.407 [2024-11-20 16:19:48.186158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.407 qpair failed and we were unable to recover it. 
00:29:17.407 [2024-11-20 16:19:48.195951] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.407 [2024-11-20 16:19:48.195995] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.407 [2024-11-20 16:19:48.196012] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.407 [2024-11-20 16:19:48.196022] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.407 [2024-11-20 16:19:48.196031] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.407 [2024-11-20 16:19:48.206402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.407 qpair failed and we were unable to recover it. 00:29:17.666 [2024-11-20 16:19:48.216058] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.666 [2024-11-20 16:19:48.216102] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.666 [2024-11-20 16:19:48.216118] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.666 [2024-11-20 16:19:48.216128] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.666 [2024-11-20 16:19:48.216136] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.666 [2024-11-20 16:19:48.226343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.666 qpair failed and we were unable to recover it. 00:29:17.666 [2024-11-20 16:19:48.236059] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.666 [2024-11-20 16:19:48.236095] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.666 [2024-11-20 16:19:48.236112] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.666 [2024-11-20 16:19:48.236121] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.666 [2024-11-20 16:19:48.236130] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.666 [2024-11-20 16:19:48.246538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.666 qpair failed and we were unable to recover it. 
00:29:17.666 [2024-11-20 16:19:48.256092] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.666 [2024-11-20 16:19:48.256134] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.666 [2024-11-20 16:19:48.256150] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.666 [2024-11-20 16:19:48.256159] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.666 [2024-11-20 16:19:48.256167] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.666 [2024-11-20 16:19:48.266396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.666 qpair failed and we were unable to recover it. 00:29:17.666 [2024-11-20 16:19:48.276220] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.666 [2024-11-20 16:19:48.276259] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.666 [2024-11-20 16:19:48.276276] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.666 [2024-11-20 16:19:48.276285] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.666 [2024-11-20 16:19:48.276294] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.667 [2024-11-20 16:19:48.286700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.667 qpair failed and we were unable to recover it. 00:29:17.667 [2024-11-20 16:19:48.296253] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.667 [2024-11-20 16:19:48.296293] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.667 [2024-11-20 16:19:48.296310] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.667 [2024-11-20 16:19:48.296318] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.667 [2024-11-20 16:19:48.296327] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.667 [2024-11-20 16:19:48.306558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.667 qpair failed and we were unable to recover it. 
00:29:17.667 [2024-11-20 16:19:48.316344] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.667 [2024-11-20 16:19:48.316381] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.667 [2024-11-20 16:19:48.316397] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.667 [2024-11-20 16:19:48.316406] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.667 [2024-11-20 16:19:48.316415] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.667 [2024-11-20 16:19:48.326654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.667 qpair failed and we were unable to recover it. 00:29:17.667 [2024-11-20 16:19:48.336325] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.667 [2024-11-20 16:19:48.336367] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.667 [2024-11-20 16:19:48.336384] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.667 [2024-11-20 16:19:48.336394] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.667 [2024-11-20 16:19:48.336403] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.667 [2024-11-20 16:19:48.346746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.667 qpair failed and we were unable to recover it. 00:29:17.667 [2024-11-20 16:19:48.356460] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.667 [2024-11-20 16:19:48.356501] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.667 [2024-11-20 16:19:48.356523] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.667 [2024-11-20 16:19:48.356536] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.667 [2024-11-20 16:19:48.356545] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.667 [2024-11-20 16:19:48.366794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.667 qpair failed and we were unable to recover it. 
00:29:17.667 [2024-11-20 16:19:48.376542] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.667 [2024-11-20 16:19:48.376583] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.667 [2024-11-20 16:19:48.376600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.667 [2024-11-20 16:19:48.376609] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.667 [2024-11-20 16:19:48.376618] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.667 [2024-11-20 16:19:48.387086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.667 qpair failed and we were unable to recover it. 00:29:17.667 [2024-11-20 16:19:48.396547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.667 [2024-11-20 16:19:48.396586] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.667 [2024-11-20 16:19:48.396602] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.667 [2024-11-20 16:19:48.396611] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.667 [2024-11-20 16:19:48.396620] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.667 [2024-11-20 16:19:48.407003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.667 qpair failed and we were unable to recover it. 00:29:17.667 [2024-11-20 16:19:48.416632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.667 [2024-11-20 16:19:48.416676] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.667 [2024-11-20 16:19:48.416693] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.667 [2024-11-20 16:19:48.416702] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.667 [2024-11-20 16:19:48.416710] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.667 [2024-11-20 16:19:48.427014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.667 qpair failed and we were unable to recover it. 
00:29:17.667 [2024-11-20 16:19:48.436743] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.667 [2024-11-20 16:19:48.436789] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.667 [2024-11-20 16:19:48.436806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.667 [2024-11-20 16:19:48.436815] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.667 [2024-11-20 16:19:48.436824] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.667 [2024-11-20 16:19:48.447112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.667 qpair failed and we were unable to recover it. 00:29:17.667 [2024-11-20 16:19:48.456720] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.667 [2024-11-20 16:19:48.456764] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.667 [2024-11-20 16:19:48.456781] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.667 [2024-11-20 16:19:48.456790] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.667 [2024-11-20 16:19:48.456798] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.667 [2024-11-20 16:19:48.466938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.667 qpair failed and we were unable to recover it. 00:29:17.927 [2024-11-20 16:19:48.476710] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.927 [2024-11-20 16:19:48.476755] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.927 [2024-11-20 16:19:48.476772] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.927 [2024-11-20 16:19:48.476781] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.927 [2024-11-20 16:19:48.476790] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.927 [2024-11-20 16:19:48.487126] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.927 qpair failed and we were unable to recover it. 
00:29:17.927 [2024-11-20 16:19:48.496852] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.927 [2024-11-20 16:19:48.496893] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.927 [2024-11-20 16:19:48.496909] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.927 [2024-11-20 16:19:48.496918] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.927 [2024-11-20 16:19:48.496926] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.927 [2024-11-20 16:19:48.507276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.927 qpair failed and we were unable to recover it. 00:29:17.927 [2024-11-20 16:19:48.516929] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.927 [2024-11-20 16:19:48.516976] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.927 [2024-11-20 16:19:48.516992] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.927 [2024-11-20 16:19:48.517001] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.927 [2024-11-20 16:19:48.517010] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.927 [2024-11-20 16:19:48.527299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.927 qpair failed and we were unable to recover it. 00:29:17.927 [2024-11-20 16:19:48.536902] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.927 [2024-11-20 16:19:48.536941] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.927 [2024-11-20 16:19:48.536961] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.927 [2024-11-20 16:19:48.536971] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.927 [2024-11-20 16:19:48.536979] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.927 [2024-11-20 16:19:48.547281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.927 qpair failed and we were unable to recover it. 
00:29:17.927 [2024-11-20 16:19:48.557138] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.927 [2024-11-20 16:19:48.557181] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.927 [2024-11-20 16:19:48.557197] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.927 [2024-11-20 16:19:48.557207] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.927 [2024-11-20 16:19:48.557216] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.927 [2024-11-20 16:19:48.567275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.927 qpair failed and we were unable to recover it. 00:29:17.927 [2024-11-20 16:19:48.577028] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.927 [2024-11-20 16:19:48.577069] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.927 [2024-11-20 16:19:48.577086] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.927 [2024-11-20 16:19:48.577095] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.927 [2024-11-20 16:19:48.577103] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.927 [2024-11-20 16:19:48.587488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.927 qpair failed and we were unable to recover it. 00:29:17.927 [2024-11-20 16:19:48.597144] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.927 [2024-11-20 16:19:48.597186] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.927 [2024-11-20 16:19:48.597203] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.927 [2024-11-20 16:19:48.597213] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.927 [2024-11-20 16:19:48.597222] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.927 [2024-11-20 16:19:48.607617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.927 qpair failed and we were unable to recover it. 
00:29:17.927 [2024-11-20 16:19:48.617147] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.927 [2024-11-20 16:19:48.617191] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.927 [2024-11-20 16:19:48.617208] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.927 [2024-11-20 16:19:48.617217] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.927 [2024-11-20 16:19:48.617226] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.927 [2024-11-20 16:19:48.627541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.927 qpair failed and we were unable to recover it. 00:29:17.927 [2024-11-20 16:19:48.637196] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.927 [2024-11-20 16:19:48.637238] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.928 [2024-11-20 16:19:48.637255] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.928 [2024-11-20 16:19:48.637265] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.928 [2024-11-20 16:19:48.637274] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.928 [2024-11-20 16:19:48.647555] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.928 qpair failed and we were unable to recover it. 00:29:17.928 [2024-11-20 16:19:48.657353] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.928 [2024-11-20 16:19:48.657398] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.928 [2024-11-20 16:19:48.657414] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.928 [2024-11-20 16:19:48.657423] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.928 [2024-11-20 16:19:48.657432] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.928 [2024-11-20 16:19:48.667657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.928 qpair failed and we were unable to recover it. 
00:29:17.928 [2024-11-20 16:19:48.677370] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.928 [2024-11-20 16:19:48.677410] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.928 [2024-11-20 16:19:48.677427] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.928 [2024-11-20 16:19:48.677437] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.928 [2024-11-20 16:19:48.677446] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.928 [2024-11-20 16:19:48.687825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.928 qpair failed and we were unable to recover it. 00:29:17.928 [2024-11-20 16:19:48.697541] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.928 [2024-11-20 16:19:48.697584] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.928 [2024-11-20 16:19:48.697600] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.928 [2024-11-20 16:19:48.697609] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.928 [2024-11-20 16:19:48.697617] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.928 [2024-11-20 16:19:48.707808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.928 qpair failed and we were unable to recover it. 00:29:17.928 [2024-11-20 16:19:48.717509] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:17.928 [2024-11-20 16:19:48.717558] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:17.928 [2024-11-20 16:19:48.717574] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:17.928 [2024-11-20 16:19:48.717584] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:17.928 [2024-11-20 16:19:48.717592] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:17.928 [2024-11-20 16:19:48.727893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:17.928 qpair failed and we were unable to recover it. 
00:29:18.187 [2024-11-20 16:19:48.737547] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.187 [2024-11-20 16:19:48.737587] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.187 [2024-11-20 16:19:48.737604] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.187 [2024-11-20 16:19:48.737613] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.187 [2024-11-20 16:19:48.737622] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.187 [2024-11-20 16:19:48.747895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.187 qpair failed and we were unable to recover it. 00:29:18.187 [2024-11-20 16:19:48.757599] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.187 [2024-11-20 16:19:48.757642] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.187 [2024-11-20 16:19:48.757658] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.187 [2024-11-20 16:19:48.757667] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.187 [2024-11-20 16:19:48.757676] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.187 [2024-11-20 16:19:48.768033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.187 qpair failed and we were unable to recover it. 00:29:18.187 [2024-11-20 16:19:48.777728] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.187 [2024-11-20 16:19:48.777770] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.187 [2024-11-20 16:19:48.777787] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.187 [2024-11-20 16:19:48.777796] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.187 [2024-11-20 16:19:48.777804] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.187 [2024-11-20 16:19:48.788068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.187 qpair failed and we were unable to recover it. 
00:29:18.187 [2024-11-20 16:19:48.797687] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.188 [2024-11-20 16:19:48.797724] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.188 [2024-11-20 16:19:48.797741] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.188 [2024-11-20 16:19:48.797750] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.188 [2024-11-20 16:19:48.797761] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.188 [2024-11-20 16:19:48.808012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.188 qpair failed and we were unable to recover it. 00:29:18.188 [2024-11-20 16:19:48.817807] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.188 [2024-11-20 16:19:48.817848] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.188 [2024-11-20 16:19:48.817865] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.188 [2024-11-20 16:19:48.817874] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.188 [2024-11-20 16:19:48.817883] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.188 [2024-11-20 16:19:48.828074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.188 qpair failed and we were unable to recover it. 00:29:18.188 [2024-11-20 16:19:48.837790] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.188 [2024-11-20 16:19:48.837831] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.188 [2024-11-20 16:19:48.837848] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.188 [2024-11-20 16:19:48.837857] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.188 [2024-11-20 16:19:48.837866] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.188 [2024-11-20 16:19:48.848189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.188 qpair failed and we were unable to recover it. 
00:29:18.188 [2024-11-20 16:19:48.857985] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.188 [2024-11-20 16:19:48.858030] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.188 [2024-11-20 16:19:48.858047] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.188 [2024-11-20 16:19:48.858056] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.188 [2024-11-20 16:19:48.858065] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.188 [2024-11-20 16:19:48.868337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.188 qpair failed and we were unable to recover it. 00:29:18.188 [2024-11-20 16:19:48.877949] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.188 [2024-11-20 16:19:48.877994] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.188 [2024-11-20 16:19:48.878011] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.188 [2024-11-20 16:19:48.878020] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.188 [2024-11-20 16:19:48.878028] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.188 [2024-11-20 16:19:48.888261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.188 qpair failed and we were unable to recover it. 00:29:18.188 [2024-11-20 16:19:48.898066] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.188 [2024-11-20 16:19:48.898110] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.188 [2024-11-20 16:19:48.898128] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.188 [2024-11-20 16:19:48.898138] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.188 [2024-11-20 16:19:48.898147] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.188 [2024-11-20 16:19:48.908402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.188 qpair failed and we were unable to recover it. 
00:29:18.188 [2024-11-20 16:19:48.918074] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.188 [2024-11-20 16:19:48.918116] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.188 [2024-11-20 16:19:48.918133] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.188 [2024-11-20 16:19:48.918142] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.188 [2024-11-20 16:19:48.918151] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.188 [2024-11-20 16:19:48.928443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.188 qpair failed and we were unable to recover it. 00:29:18.188 [2024-11-20 16:19:48.938176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.188 [2024-11-20 16:19:48.938224] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.188 [2024-11-20 16:19:48.938241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.188 [2024-11-20 16:19:48.938250] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.188 [2024-11-20 16:19:48.938259] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.188 [2024-11-20 16:19:48.948511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.188 qpair failed and we were unable to recover it. 00:29:18.188 [2024-11-20 16:19:48.958176] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.188 [2024-11-20 16:19:48.958213] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.188 [2024-11-20 16:19:48.958230] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.188 [2024-11-20 16:19:48.958239] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.188 [2024-11-20 16:19:48.958248] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.188 [2024-11-20 16:19:48.968649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.188 qpair failed and we were unable to recover it. 
00:29:18.188 [2024-11-20 16:19:48.978223] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.188 [2024-11-20 16:19:48.978262] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.188 [2024-11-20 16:19:48.978281] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.188 [2024-11-20 16:19:48.978291] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.188 [2024-11-20 16:19:48.978300] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.188 [2024-11-20 16:19:48.988583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.188 qpair failed and we were unable to recover it. 00:29:18.448 [2024-11-20 16:19:48.998322] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.448 [2024-11-20 16:19:48.998364] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.448 [2024-11-20 16:19:48.998380] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.448 [2024-11-20 16:19:48.998389] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.448 [2024-11-20 16:19:48.998397] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.448 [2024-11-20 16:19:49.008625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.448 qpair failed and we were unable to recover it. 00:29:18.448 [2024-11-20 16:19:49.018452] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.448 [2024-11-20 16:19:49.018495] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.448 [2024-11-20 16:19:49.018512] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.448 [2024-11-20 16:19:49.018526] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.448 [2024-11-20 16:19:49.018535] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.448 [2024-11-20 16:19:49.028788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.448 qpair failed and we were unable to recover it. 
00:29:18.448 [2024-11-20 16:19:49.038579] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.448 [2024-11-20 16:19:49.038620] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.448 [2024-11-20 16:19:49.038638] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.448 [2024-11-20 16:19:49.038647] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.448 [2024-11-20 16:19:49.038656] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.448 [2024-11-20 16:19:49.048933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.448 qpair failed and we were unable to recover it. 00:29:18.448 [2024-11-20 16:19:49.058634] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.448 [2024-11-20 16:19:49.058674] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.448 [2024-11-20 16:19:49.058691] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.448 [2024-11-20 16:19:49.058700] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.448 [2024-11-20 16:19:49.058708] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.448 [2024-11-20 16:19:49.069003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.448 qpair failed and we were unable to recover it. 00:29:18.448 [2024-11-20 16:19:49.078561] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.448 [2024-11-20 16:19:49.078600] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.448 [2024-11-20 16:19:49.078616] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.448 [2024-11-20 16:19:49.078625] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.448 [2024-11-20 16:19:49.078634] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.448 [2024-11-20 16:19:49.089003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.448 qpair failed and we were unable to recover it. 
00:29:18.448 [2024-11-20 16:19:49.098719] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.448 [2024-11-20 16:19:49.098759] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.448 [2024-11-20 16:19:49.098774] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.448 [2024-11-20 16:19:49.098784] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.448 [2024-11-20 16:19:49.098792] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.448 [2024-11-20 16:19:49.109081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.448 qpair failed and we were unable to recover it. 00:29:18.448 [2024-11-20 16:19:49.118665] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.448 [2024-11-20 16:19:49.118709] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.448 [2024-11-20 16:19:49.118725] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.448 [2024-11-20 16:19:49.118734] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.448 [2024-11-20 16:19:49.118742] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.448 [2024-11-20 16:19:49.129091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.448 qpair failed and we were unable to recover it. 00:29:18.448 [2024-11-20 16:19:49.138632] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.448 [2024-11-20 16:19:49.138672] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.448 [2024-11-20 16:19:49.138689] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.448 [2024-11-20 16:19:49.138698] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.448 [2024-11-20 16:19:49.138706] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.448 [2024-11-20 16:19:49.149339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.448 qpair failed and we were unable to recover it. 
00:29:18.448 [2024-11-20 16:19:49.158916] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.448 [2024-11-20 16:19:49.158954] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.448 [2024-11-20 16:19:49.158974] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.448 [2024-11-20 16:19:49.158983] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.448 [2024-11-20 16:19:49.158992] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.448 [2024-11-20 16:19:49.169312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.448 qpair failed and we were unable to recover it. 00:29:18.448 [2024-11-20 16:19:49.178984] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.448 [2024-11-20 16:19:49.179025] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.449 [2024-11-20 16:19:49.179041] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.449 [2024-11-20 16:19:49.179050] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.449 [2024-11-20 16:19:49.179058] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.449 [2024-11-20 16:19:49.189501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.449 qpair failed and we were unable to recover it. 00:29:18.449 [2024-11-20 16:19:49.199112] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.449 [2024-11-20 16:19:49.199148] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.449 [2024-11-20 16:19:49.199165] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.449 [2024-11-20 16:19:49.199174] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.449 [2024-11-20 16:19:49.199182] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.449 [2024-11-20 16:19:49.209563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.449 qpair failed and we were unable to recover it. 
00:29:18.449 [2024-11-20 16:19:49.219108] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:18.449 [2024-11-20 16:19:49.219147] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:18.449 [2024-11-20 16:19:49.219164] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:18.449 [2024-11-20 16:19:49.219173] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:18.449 [2024-11-20 16:19:49.219181] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:18.449 [2024-11-20 16:19:49.229554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:18.449 qpair failed and we were unable to recover it. 00:29:19.828 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Read completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Read completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Read completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Read completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Read completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Read completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Read completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Read completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Read completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Read completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Read completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Read completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Read completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Read 
completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 Write completed with error (sct=0, sc=8) 00:29:19.829 starting I/O failed 00:29:19.829 [2024-11-20 16:19:50.235217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:19.829 [2024-11-20 16:19:50.241737] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.829 [2024-11-20 16:19:50.241788] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.829 [2024-11-20 16:19:50.241806] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.829 [2024-11-20 16:19:50.241816] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.829 [2024-11-20 16:19:50.241825] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d15c0 00:29:19.829 [2024-11-20 16:19:50.252362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:19.829 qpair failed and we were unable to recover it. 00:29:19.829 [2024-11-20 16:19:50.262006] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.829 [2024-11-20 16:19:50.262045] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.829 [2024-11-20 16:19:50.262061] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.829 [2024-11-20 16:19:50.262070] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.829 [2024-11-20 16:19:50.262079] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000002d15c0 00:29:19.829 [2024-11-20 16:19:50.273772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:19.829 qpair failed and we were unable to recover it. 00:29:19.829 [2024-11-20 16:19:50.273900] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:19.829 A controller has encountered a failure and is being reset. 
00:29:19.829 [2024-11-20 16:19:50.282156] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.829 [2024-11-20 16:19:50.282205] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.829 [2024-11-20 16:19:50.282233] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.829 [2024-11-20 16:19:50.282247] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.829 [2024-11-20 16:19:50.282263] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:19.829 [2024-11-20 16:19:50.292488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:19.829 qpair failed and we were unable to recover it. 00:29:19.829 [2024-11-20 16:19:50.302181] ctrlr.c: 662:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:19.829 [2024-11-20 16:19:50.302223] nvme_fabric.c: 598:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:RDMA adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:19.829 [2024-11-20 16:19:50.302241] nvme_fabric.c: 609:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:19.829 [2024-11-20 16:19:50.302251] nvme_rdma.c:1404:nvme_rdma_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:19.829 [2024-11-20 16:19:50.302259] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:19.829 [2024-11-20 16:19:50.312603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:19.829 qpair failed and we were unable to recover it. 00:29:19.829 [2024-11-20 16:19:50.312733] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:19.829 [2024-11-20 16:19:50.347118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:19.829 Controller properly reset. 00:29:19.829 Initializing NVMe Controllers 00:29:19.829 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:19.829 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:19.829 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:19.829 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:19.829 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:19.829 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:19.829 Initialization complete. Launching workers. 
00:29:19.829 Starting thread on core 1 00:29:19.829 Starting thread on core 2 00:29:19.829 Starting thread on core 3 00:29:19.829 Starting thread on core 0 00:29:19.829 16:19:50 -- host/target_disconnect.sh@59 -- # sync 00:29:19.829 00:29:19.829 real 0m12.569s 00:29:19.829 user 0m27.425s 00:29:19.829 sys 0m3.066s 00:29:19.829 16:19:50 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:19.829 16:19:50 -- common/autotest_common.sh@10 -- # set +x 00:29:19.829 ************************************ 00:29:19.829 END TEST nvmf_target_disconnect_tc2 00:29:19.829 ************************************ 00:29:19.829 16:19:50 -- host/target_disconnect.sh@80 -- # '[' -n 192.168.100.9 ']' 00:29:19.829 16:19:50 -- host/target_disconnect.sh@81 -- # run_test nvmf_target_disconnect_tc3 nvmf_target_disconnect_tc3 00:29:19.829 16:19:50 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:29:19.829 16:19:50 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:19.829 16:19:50 -- common/autotest_common.sh@10 -- # set +x 00:29:19.829 ************************************ 00:29:19.829 START TEST nvmf_target_disconnect_tc3 00:29:19.829 ************************************ 00:29:19.829 16:19:50 -- common/autotest_common.sh@1114 -- # nvmf_target_disconnect_tc3 00:29:19.829 16:19:50 -- host/target_disconnect.sh@65 -- # reconnectpid=1507580 00:29:19.829 16:19:50 -- host/target_disconnect.sh@67 -- # sleep 2 00:29:19.829 16:19:50 -- host/target_disconnect.sh@63 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:rdma adrfam:IPv4 traddr:192.168.100.8 trsvcid:4420 alt_traddr:192.168.100.9' 00:29:19.829 EAL: No free 2048 kB hugepages reported on node 1 00:29:21.742 16:19:52 -- host/target_disconnect.sh@68 -- # kill -9 1506208 00:29:21.742 16:19:52 -- host/target_disconnect.sh@70 -- # sleep 2 00:29:23.123 Write completed with error (sct=0, sc=8) 00:29:23.123 starting I/O failed 00:29:23.123 Write completed with error (sct=0, sc=8) 00:29:23.123 starting I/O failed 00:29:23.123 Write completed with error (sct=0, sc=8) 00:29:23.123 starting I/O failed 00:29:23.123 Read completed with error (sct=0, sc=8) 00:29:23.123 starting I/O failed 00:29:23.123 Write completed with error (sct=0, sc=8) 00:29:23.123 starting I/O failed 00:29:23.123 Read completed with error (sct=0, sc=8) 00:29:23.123 starting I/O failed 00:29:23.123 Write completed with error (sct=0, sc=8) 00:29:23.123 starting I/O failed 00:29:23.123 Write completed with error (sct=0, sc=8) 00:29:23.123 starting I/O failed 00:29:23.123 Read completed with error (sct=0, sc=8) 00:29:23.123 starting I/O failed 00:29:23.123 Write completed with error (sct=0, sc=8) 00:29:23.123 starting I/O failed 00:29:23.123 Read completed with error (sct=0, sc=8) 00:29:23.123 starting I/O failed 00:29:23.123 Write completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Read completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Write completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Read completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Write completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Read completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Write completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Read completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Read 
completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Write completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Write completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Read completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Write completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Write completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Read completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Write completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Read completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Read completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Write completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Write completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 Read completed with error (sct=0, sc=8) 00:29:23.124 starting I/O failed 00:29:23.124 [2024-11-20 16:19:53.649010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:23.692 /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 62: 1506208 Killed "${NVMF_APP[@]}" "$@" 00:29:23.692 16:19:54 -- host/target_disconnect.sh@71 -- # disconnect_init 192.168.100.9 00:29:23.692 16:19:54 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:23.692 16:19:54 -- nvmf/common.sh@467 -- # timing_enter start_nvmf_tgt 00:29:23.692 16:19:54 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:23.692 16:19:54 -- common/autotest_common.sh@10 -- # set +x 00:29:23.692 16:19:54 -- nvmf/common.sh@469 -- # nvmfpid=1508157 00:29:23.692 16:19:54 -- nvmf/common.sh@470 -- # waitforlisten 1508157 00:29:23.692 16:19:54 -- nvmf/common.sh@468 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:23.692 16:19:54 -- common/autotest_common.sh@829 -- # '[' -z 1508157 ']' 00:29:23.693 16:19:54 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.693 16:19:54 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:23.693 16:19:54 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.693 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.693 16:19:54 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:23.693 16:19:54 -- common/autotest_common.sh@10 -- # set +x 00:29:23.952 [2024-11-20 16:19:54.528244] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 
00:29:23.952 [2024-11-20 16:19:54.528297] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.952 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.952 [2024-11-20 16:19:54.617607] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:23.952 Read completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Write completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Read completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Read completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Read completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Read completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Read completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Write completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Read completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Write completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Write completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Write completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Write completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Write completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Write completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Read completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Read completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Read completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Write completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Read completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Write completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Write completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Read completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Write completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Write completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Read completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Write completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Read completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Read completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Write completed with error (sct=0, sc=8) 00:29:23.952 starting I/O failed 00:29:23.952 Read completed with error (sct=0, sc=8) 00:29:23.953 starting I/O failed 00:29:23.953 Write completed with error (sct=0, sc=8) 00:29:23.953 starting I/O failed 00:29:23.953 [2024-11-20 16:19:54.654147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:23.953 [2024-11-20 16:19:54.654179] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:23.953 [2024-11-20 16:19:54.654285] app.c: 
488:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:23.953 [2024-11-20 16:19:54.654295] app.c: 489:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:23.953 [2024-11-20 16:19:54.654303] app.c: 494:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:23.953 [2024-11-20 16:19:54.654424] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:29:23.953 [2024-11-20 16:19:54.654552] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:29:23.953 [2024-11-20 16:19:54.654661] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:29:23.953 [2024-11-20 16:19:54.654663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:29:24.892 16:19:55 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:24.892 16:19:55 -- common/autotest_common.sh@862 -- # return 0 00:29:24.892 16:19:55 -- nvmf/common.sh@471 -- # timing_exit start_nvmf_tgt 00:29:24.892 16:19:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:24.892 16:19:55 -- common/autotest_common.sh@10 -- # set +x 00:29:24.892 16:19:55 -- nvmf/common.sh@472 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:24.892 16:19:55 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:24.892 16:19:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.892 16:19:55 -- common/autotest_common.sh@10 -- # set +x 00:29:24.892 Malloc0 00:29:24.892 16:19:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.892 16:19:55 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t rdma --num-shared-buffers 1024 00:29:24.892 16:19:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.892 16:19:55 -- common/autotest_common.sh@10 -- # set +x 00:29:24.892 [2024-11-20 16:19:55.441055] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0x1806ab0/0x1812580) succeed. 00:29:24.892 [2024-11-20 16:19:55.450380] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0x1808050/0x1853c20) succeed. 
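Stripped of the rpc_cmd/xtrace wrappers, the target-side bring-up performed here, together with the rpc_cmd calls that continue just below, amounts to the sequence sketched next. This is a condensed illustration using the parameters visible in this run, not the harness code itself:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    RPC="$SPDK/scripts/rpc.py -s /var/tmp/spdk.sock"
    # Target was launched above as: nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
    $RPC bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB backing namespace
    $RPC nvmf_create_transport -t rdma --num-shared-buffers 1024
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    # Listeners on the alternate (failover) address exercised by tc3:
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420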
00:29:24.892 16:19:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.892 16:19:55 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:24.892 16:19:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.892 16:19:55 -- common/autotest_common.sh@10 -- # set +x 00:29:24.892 16:19:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.892 16:19:55 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:24.892 16:19:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.892 16:19:55 -- common/autotest_common.sh@10 -- # set +x 00:29:24.892 16:19:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.892 16:19:55 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t rdma -a 192.168.100.9 -s 4420 00:29:24.892 16:19:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.892 16:19:55 -- common/autotest_common.sh@10 -- # set +x 00:29:24.892 [2024-11-20 16:19:55.592141] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.9 port 4420 *** 00:29:24.892 16:19:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.892 16:19:55 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t rdma -a 192.168.100.9 -s 4420 00:29:24.892 16:19:55 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.892 16:19:55 -- common/autotest_common.sh@10 -- # set +x 00:29:24.892 16:19:55 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.892 16:19:55 -- host/target_disconnect.sh@73 -- # wait 1507580 00:29:24.892 Write completed with error (sct=0, sc=8) 00:29:24.892 starting I/O failed 00:29:24.892 Read completed with error (sct=0, sc=8) 00:29:24.892 starting I/O failed 00:29:24.892 Read completed with error (sct=0, sc=8) 00:29:24.892 starting I/O failed 00:29:24.892 Read completed with error (sct=0, sc=8) 00:29:24.892 starting I/O failed 00:29:24.892 Write completed with error (sct=0, sc=8) 00:29:24.892 starting I/O failed 00:29:24.892 Read completed with error (sct=0, sc=8) 00:29:24.892 starting I/O failed 00:29:24.892 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Write completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Write completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Write completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Write completed with error 
(sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Write completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Write completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Write completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Write completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 Read completed with error (sct=0, sc=8) 00:29:24.893 starting I/O failed 00:29:24.893 [2024-11-20 16:19:55.659394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:24.893 [2024-11-20 16:19:55.661007] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:24.893 [2024-11-20 16:19:55.661027] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:24.893 [2024-11-20 16:19:55.661035] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:26.273 [2024-11-20 16:19:56.664697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:26.273 qpair failed and we were unable to recover it. 00:29:26.273 [2024-11-20 16:19:56.666082] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:26.273 [2024-11-20 16:19:56.666100] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:26.273 [2024-11-20 16:19:56.666112] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:27.209 [2024-11-20 16:19:57.669951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:27.209 qpair failed and we were unable to recover it. 00:29:27.209 [2024-11-20 16:19:57.671361] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:27.209 [2024-11-20 16:19:57.671379] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:27.209 [2024-11-20 16:19:57.671387] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:28.148 [2024-11-20 16:19:58.675270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:28.148 qpair failed and we were unable to recover it. 
00:29:28.148 [2024-11-20 16:19:58.676775] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:28.148 [2024-11-20 16:19:58.676792] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:28.148 [2024-11-20 16:19:58.676800] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:29.086 [2024-11-20 16:19:59.680574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:29.086 qpair failed and we were unable to recover it. 00:29:29.086 [2024-11-20 16:19:59.681975] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:29.086 [2024-11-20 16:19:59.681992] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:29.086 [2024-11-20 16:19:59.682000] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:30.025 [2024-11-20 16:20:00.685846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.025 qpair failed and we were unable to recover it. 00:29:30.025 [2024-11-20 16:20:00.687301] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:30.025 [2024-11-20 16:20:00.687319] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:30.025 [2024-11-20 16:20:00.687327] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:30.975 [2024-11-20 16:20:01.691171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:30.975 qpair failed and we were unable to recover it. 00:29:30.975 [2024-11-20 16:20:01.692637] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:30.975 [2024-11-20 16:20:01.692653] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:30.975 [2024-11-20 16:20:01.692661] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d3080 00:29:31.912 [2024-11-20 16:20:02.696420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:29:31.912 qpair failed and we were unable to recover it. 00:29:31.912 [2024-11-20 16:20:02.698150] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:31.912 [2024-11-20 16:20:02.698174] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:31.912 [2024-11-20 16:20:02.698183] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:33.291 [2024-11-20 16:20:03.701971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:33.291 qpair failed and we were unable to recover it. 
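Each cycle above is the host retrying the fabric CONNECT and being rejected at the RDMA CM level (RDMA connect error -74, then CQ transport error -6 on the qpair). When investigating a rejection loop like this, one quick target-side check is to confirm that the subsystem and its listeners are actually present; a hedged sketch using the same RPC the spdkcli run later in this log touches (nvmf_get_subsystems):

    # Illustrative check only; assumes the target's RPC socket is /var/tmp/spdk.sock.
    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems \
        | grep -E '"nqn"|"traddr"|"trsvcid"'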
00:29:33.291 [2024-11-20 16:20:03.703454] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:33.291 [2024-11-20 16:20:03.703471] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:33.291 [2024-11-20 16:20:03.703480] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003cf840 00:29:34.229 [2024-11-20 16:20:04.707423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:29:34.229 qpair failed and we were unable to recover it. 00:29:34.229 [2024-11-20 16:20:04.707587] nvme_ctrlr.c:4339:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:29:34.229 A controller has encountered a failure and is being reset. 00:29:34.229 Resorting to new failover address 192.168.100.9 00:29:34.229 [2024-11-20 16:20:04.709219] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:34.229 [2024-11-20 16:20:04.709248] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:34.229 [2024-11-20 16:20:04.709260] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:35.168 [2024-11-20 16:20:05.713135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:35.168 qpair failed and we were unable to recover it. 00:29:35.168 [2024-11-20 16:20:05.714546] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 8) 00:29:35.168 [2024-11-20 16:20:05.714564] nvme_rdma.c:1163:nvme_rdma_connect_established: *ERROR*: RDMA connect error -74 00:29:35.168 [2024-11-20 16:20:05.714572] nvme_rdma.c:2730:nvme_rdma_qpair_process_completions: *ERROR*: Failed to connect rqpair=0x2000003d4c80 00:29:36.107 [2024-11-20 16:20:06.718274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:29:36.107 qpair failed and we were unable to recover it. 00:29:36.107 [2024-11-20 16:20:06.718373] nvme_ctrlr.c:1029:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:36.107 [2024-11-20 16:20:06.718481] nvme_rdma.c: 617:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_DISCONNECTED but received RDMA_CM_EVENT_TIMEWAIT_EXIT (15) from CM event channel (status = 0) 00:29:36.107 [2024-11-20 16:20:06.720579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 0 00:29:36.107 Controller properly reset. 
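Here the host gives up on 192.168.100.8 and resorts to the alternate address supplied via alt_traddr, after which the controller resets cleanly. Outside the SPDK example, the equivalent sanity check from a plain Linux initiator would be to discover the failover listener with nvme-cli; a sketch, assuming the kernel nvme-rdma module is loaded as it is elsewhere in this log:

    # Hedged sketch: confirm the failover listener answers on the second IP.
    modprobe nvme-rdma
    nvme discover -t rdma -a 192.168.100.9 -s 4420
    # nvme connect -t rdma -n nqn.2016-06.io.spdk:cnode1 -a 192.168.100.9 -s 4420 -i 15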
00:29:37.047 Write completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Read completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Write completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Write completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Read completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Write completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Write completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Write completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Read completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Write completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Read completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Read completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Write completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Read completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Write completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Write completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Read completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Read completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Write completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Read completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Write completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Read completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Read completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Write completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Write completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Read completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Read completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Write completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Read completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Read completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Write completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 Read completed with error (sct=0, sc=8) 00:29:37.047 starting I/O failed 00:29:37.047 [2024-11-20 16:20:07.763952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:29:37.047 Initializing NVMe Controllers 00:29:37.047 Attaching to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.047 Attached to NVMe over Fabrics controller at 192.168.100.8:4420: nqn.2016-06.io.spdk:cnode1 00:29:37.047 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:37.047 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:37.047 Associating RDMA (addr:192.168.100.8 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:37.047 Associating RDMA (addr:192.168.100.8 
subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:37.047 Initialization complete. Launching workers. 00:29:37.047 Starting thread on core 1 00:29:37.047 Starting thread on core 2 00:29:37.047 Starting thread on core 3 00:29:37.047 Starting thread on core 0 00:29:37.047 16:20:07 -- host/target_disconnect.sh@74 -- # sync 00:29:37.047 00:29:37.047 real 0m17.347s 00:29:37.047 user 0m59.708s 00:29:37.047 sys 0m5.381s 00:29:37.047 16:20:07 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:37.047 16:20:07 -- common/autotest_common.sh@10 -- # set +x 00:29:37.047 ************************************ 00:29:37.047 END TEST nvmf_target_disconnect_tc3 00:29:37.047 ************************************ 00:29:37.307 16:20:07 -- host/target_disconnect.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:29:37.307 16:20:07 -- host/target_disconnect.sh@85 -- # nvmftestfini 00:29:37.307 16:20:07 -- nvmf/common.sh@476 -- # nvmfcleanup 00:29:37.307 16:20:07 -- nvmf/common.sh@116 -- # sync 00:29:37.307 16:20:07 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:29:37.307 16:20:07 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:29:37.307 16:20:07 -- nvmf/common.sh@119 -- # set +e 00:29:37.307 16:20:07 -- nvmf/common.sh@120 -- # for i in {1..20} 00:29:37.307 16:20:07 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:29:37.307 rmmod nvme_rdma 00:29:37.307 rmmod nvme_fabrics 00:29:37.307 16:20:07 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:29:37.307 16:20:07 -- nvmf/common.sh@123 -- # set -e 00:29:37.307 16:20:07 -- nvmf/common.sh@124 -- # return 0 00:29:37.307 16:20:07 -- nvmf/common.sh@477 -- # '[' -n 1508157 ']' 00:29:37.307 16:20:07 -- nvmf/common.sh@478 -- # killprocess 1508157 00:29:37.307 16:20:07 -- common/autotest_common.sh@936 -- # '[' -z 1508157 ']' 00:29:37.307 16:20:07 -- common/autotest_common.sh@940 -- # kill -0 1508157 00:29:37.307 16:20:07 -- common/autotest_common.sh@941 -- # uname 00:29:37.307 16:20:07 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:37.307 16:20:07 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1508157 00:29:37.307 16:20:07 -- common/autotest_common.sh@942 -- # process_name=reactor_4 00:29:37.307 16:20:07 -- common/autotest_common.sh@946 -- # '[' reactor_4 = sudo ']' 00:29:37.307 16:20:07 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1508157' 00:29:37.307 killing process with pid 1508157 00:29:37.307 16:20:07 -- common/autotest_common.sh@955 -- # kill 1508157 00:29:37.307 16:20:07 -- common/autotest_common.sh@960 -- # wait 1508157 00:29:37.567 16:20:08 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:29:37.567 16:20:08 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:29:37.567 00:29:37.567 real 0m38.362s 00:29:37.567 user 2m23.732s 00:29:37.567 sys 0m14.237s 00:29:37.567 16:20:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:37.567 16:20:08 -- common/autotest_common.sh@10 -- # set +x 00:29:37.567 ************************************ 00:29:37.567 END TEST nvmf_target_disconnect 00:29:37.567 ************************************ 00:29:37.567 16:20:08 -- nvmf/nvmf.sh@127 -- # timing_exit host 00:29:37.567 16:20:08 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:37.567 16:20:08 -- common/autotest_common.sh@10 -- # set +x 00:29:37.567 16:20:08 -- nvmf/nvmf.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:29:37.567 00:29:37.567 real 21m8.718s 00:29:37.567 user 67m53.961s 00:29:37.567 sys 4m55.855s 00:29:37.567 16:20:08 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:29:37.567 
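The nvmftestfini sequence above reduces to stopping the target process and unloading the kernel NVMe fabrics modules. A condensed, illustrative form of the same teardown (the PID is the one from this run):

    nvmfpid=1508157                              # nvmf_tgt started earlier with -m 0xF0
    kill "$nvmfpid"
    while kill -0 "$nvmfpid" 2>/dev/null; do sleep 0.2; done
    modprobe -v -r nvme-rdma                     # drops nvme_rdma as seen above
    modprobe -v -r nvme-fabrics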
16:20:08 -- common/autotest_common.sh@10 -- # set +x 00:29:37.567 ************************************ 00:29:37.567 END TEST nvmf_rdma 00:29:37.567 ************************************ 00:29:37.567 16:20:08 -- spdk/autotest.sh@280 -- # run_test spdkcli_nvmf_rdma /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:29:37.567 16:20:08 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:37.567 16:20:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:37.567 16:20:08 -- common/autotest_common.sh@10 -- # set +x 00:29:37.567 ************************************ 00:29:37.567 START TEST spdkcli_nvmf_rdma 00:29:37.567 ************************************ 00:29:37.567 16:20:08 -- common/autotest_common.sh@1114 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=rdma 00:29:37.827 * Looking for test storage... 00:29:37.827 * Found test storage at /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli 00:29:37.827 16:20:08 -- common/autotest_common.sh@1689 -- # [[ y == y ]] 00:29:37.827 16:20:08 -- common/autotest_common.sh@1690 -- # lcov --version 00:29:37.827 16:20:08 -- common/autotest_common.sh@1690 -- # awk '{print $NF}' 00:29:37.827 16:20:08 -- common/autotest_common.sh@1690 -- # lt 1.15 2 00:29:37.827 16:20:08 -- scripts/common.sh@372 -- # cmp_versions 1.15 '<' 2 00:29:37.827 16:20:08 -- scripts/common.sh@332 -- # local ver1 ver1_l 00:29:37.827 16:20:08 -- scripts/common.sh@333 -- # local ver2 ver2_l 00:29:37.827 16:20:08 -- scripts/common.sh@335 -- # IFS=.-: 00:29:37.827 16:20:08 -- scripts/common.sh@335 -- # read -ra ver1 00:29:37.827 16:20:08 -- scripts/common.sh@336 -- # IFS=.-: 00:29:37.827 16:20:08 -- scripts/common.sh@336 -- # read -ra ver2 00:29:37.827 16:20:08 -- scripts/common.sh@337 -- # local 'op=<' 00:29:37.827 16:20:08 -- scripts/common.sh@339 -- # ver1_l=2 00:29:37.827 16:20:08 -- scripts/common.sh@340 -- # ver2_l=1 00:29:37.827 16:20:08 -- scripts/common.sh@342 -- # local lt=0 gt=0 eq=0 v 00:29:37.827 16:20:08 -- scripts/common.sh@343 -- # case "$op" in 00:29:37.827 16:20:08 -- scripts/common.sh@344 -- # : 1 00:29:37.827 16:20:08 -- scripts/common.sh@363 -- # (( v = 0 )) 00:29:37.827 16:20:08 -- scripts/common.sh@363 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:37.827 16:20:08 -- scripts/common.sh@364 -- # decimal 1 00:29:37.827 16:20:08 -- scripts/common.sh@352 -- # local d=1 00:29:37.827 16:20:08 -- scripts/common.sh@353 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:37.827 16:20:08 -- scripts/common.sh@354 -- # echo 1 00:29:37.827 16:20:08 -- scripts/common.sh@364 -- # ver1[v]=1 00:29:37.827 16:20:08 -- scripts/common.sh@365 -- # decimal 2 00:29:37.827 16:20:08 -- scripts/common.sh@352 -- # local d=2 00:29:37.827 16:20:08 -- scripts/common.sh@353 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:37.827 16:20:08 -- scripts/common.sh@354 -- # echo 2 00:29:37.827 16:20:08 -- scripts/common.sh@365 -- # ver2[v]=2 00:29:37.827 16:20:08 -- scripts/common.sh@366 -- # (( ver1[v] > ver2[v] )) 00:29:37.827 16:20:08 -- scripts/common.sh@367 -- # (( ver1[v] < ver2[v] )) 00:29:37.827 16:20:08 -- scripts/common.sh@367 -- # return 0 00:29:37.827 16:20:08 -- common/autotest_common.sh@1691 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:37.827 16:20:08 -- common/autotest_common.sh@1703 -- # export 'LCOV_OPTS= 00:29:37.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.827 --rc genhtml_branch_coverage=1 00:29:37.827 --rc genhtml_function_coverage=1 00:29:37.827 --rc genhtml_legend=1 00:29:37.827 --rc geninfo_all_blocks=1 00:29:37.827 --rc geninfo_unexecuted_blocks=1 00:29:37.827 00:29:37.827 ' 00:29:37.827 16:20:08 -- common/autotest_common.sh@1703 -- # LCOV_OPTS=' 00:29:37.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.827 --rc genhtml_branch_coverage=1 00:29:37.827 --rc genhtml_function_coverage=1 00:29:37.827 --rc genhtml_legend=1 00:29:37.827 --rc geninfo_all_blocks=1 00:29:37.827 --rc geninfo_unexecuted_blocks=1 00:29:37.827 00:29:37.827 ' 00:29:37.827 16:20:08 -- common/autotest_common.sh@1704 -- # export 'LCOV=lcov 00:29:37.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.827 --rc genhtml_branch_coverage=1 00:29:37.827 --rc genhtml_function_coverage=1 00:29:37.827 --rc genhtml_legend=1 00:29:37.827 --rc geninfo_all_blocks=1 00:29:37.827 --rc geninfo_unexecuted_blocks=1 00:29:37.827 00:29:37.827 ' 00:29:37.827 16:20:08 -- common/autotest_common.sh@1704 -- # LCOV='lcov 00:29:37.827 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.827 --rc genhtml_branch_coverage=1 00:29:37.827 --rc genhtml_function_coverage=1 00:29:37.827 --rc genhtml_legend=1 00:29:37.827 --rc geninfo_all_blocks=1 00:29:37.827 --rc geninfo_unexecuted_blocks=1 00:29:37.827 00:29:37.827 ' 00:29:37.827 16:20:08 -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/common.sh 00:29:37.827 16:20:08 -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:29:37.827 16:20:08 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-phy-autotest/spdk/test/json_config/clear_config.py 00:29:37.827 16:20:08 -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/nvmf/common.sh 00:29:37.827 16:20:08 -- nvmf/common.sh@7 -- # uname -s 00:29:37.827 16:20:08 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:37.827 16:20:08 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:37.827 16:20:08 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:37.827 16:20:08 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:37.827 16:20:08 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:37.827 16:20:08 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:29:37.827 16:20:08 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:37.827 16:20:08 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:37.827 16:20:08 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:37.827 16:20:08 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:37.827 16:20:08 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:8013ee90-59d8-e711-906e-00163566263e 00:29:37.827 16:20:08 -- nvmf/common.sh@18 -- # NVME_HOSTID=8013ee90-59d8-e711-906e-00163566263e 00:29:37.827 16:20:08 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:37.827 16:20:08 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:37.827 16:20:08 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:37.827 16:20:08 -- nvmf/common.sh@44 -- # source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:29:37.827 16:20:08 -- scripts/common.sh@433 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:37.827 16:20:08 -- scripts/common.sh@441 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:37.827 16:20:08 -- scripts/common.sh@442 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:37.827 16:20:08 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.827 16:20:08 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.827 16:20:08 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.827 16:20:08 -- paths/export.sh@5 -- # export PATH 00:29:37.827 16:20:08 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.827 16:20:08 -- nvmf/common.sh@46 -- # : 0 00:29:37.827 16:20:08 -- nvmf/common.sh@47 -- # export NVMF_APP_SHM_ID 00:29:37.827 16:20:08 -- nvmf/common.sh@48 -- # build_nvmf_app_args 00:29:37.827 16:20:08 -- nvmf/common.sh@24 -- # '[' 0 -eq 1 ']' 00:29:37.827 16:20:08 -- nvmf/common.sh@28 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:37.827 16:20:08 -- nvmf/common.sh@30 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:37.827 16:20:08 -- nvmf/common.sh@32 -- # '[' -n '' ']' 00:29:37.827 16:20:08 -- nvmf/common.sh@34 -- # '[' 0 -eq 1 ']' 
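The nvmftestinit/prepare_net_devs phase that follows enumerates the Mellanox PCI functions, maps each one to its netdev through sysfs, and reads the IPv4 address assigned to it. Condensed into plain shell with the devices found in this run, that discovery looks roughly like:

    # Sketch of the device/IP discovery performed below by nvmf/common.sh;
    # assumes a single netdev per PCI function, as on this host.
    for pci in 0000:d9:00.0 0000:d9:00.1; do                 # mlx5 ports in this run
        netdev=$(basename /sys/bus/pci/devices/$pci/net/*)
        echo "$pci -> $netdev: $(ip -o -4 addr show "$netdev" | awk '{print $4}' | cut -d/ -f1)"
    done
    # -> mlx_0_0 carries 192.168.100.8 and mlx_0_1 carries 192.168.100.9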
00:29:37.827 16:20:08 -- nvmf/common.sh@50 -- # have_pci_nics=0 00:29:37.827 16:20:08 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:29:37.827 16:20:08 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:29:37.827 16:20:08 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:29:37.827 16:20:08 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:29:37.827 16:20:08 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:37.827 16:20:08 -- common/autotest_common.sh@10 -- # set +x 00:29:37.827 16:20:08 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:29:37.828 16:20:08 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1510696 00:29:37.828 16:20:08 -- spdkcli/common.sh@34 -- # waitforlisten 1510696 00:29:37.828 16:20:08 -- common/autotest_common.sh@829 -- # '[' -z 1510696 ']' 00:29:37.828 16:20:08 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:37.828 16:20:08 -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:29:37.828 16:20:08 -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:37.828 16:20:08 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:37.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:37.828 16:20:08 -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:37.828 16:20:08 -- common/autotest_common.sh@10 -- # set +x 00:29:37.828 [2024-11-20 16:20:08.617016] Starting SPDK v24.01.1-pre git sha1 c13c99a5e / DPDK 22.11.4 initialization... 00:29:37.828 [2024-11-20 16:20:08.617068] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1510696 ] 00:29:38.087 EAL: No free 2048 kB hugepages reported on node 1 00:29:38.087 [2024-11-20 16:20:08.686620] app.c: 798:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:38.087 [2024-11-20 16:20:08.724241] trace_flags.c: 278:trace_register_description: *ERROR*: name (RDMA_REQ_RDY_TO_COMPL_PEND) too long 00:29:38.087 [2024-11-20 16:20:08.724383] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.087 [2024-11-20 16:20:08.724385] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:38.655 16:20:09 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:38.655 16:20:09 -- common/autotest_common.sh@862 -- # return 0 00:29:38.655 16:20:09 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:29:38.655 16:20:09 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:38.655 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:29:38.914 16:20:09 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:29:38.914 16:20:09 -- spdkcli/nvmf.sh@22 -- # [[ rdma == \r\d\m\a ]] 00:29:38.914 16:20:09 -- spdkcli/nvmf.sh@23 -- # nvmftestinit 00:29:38.914 16:20:09 -- nvmf/common.sh@429 -- # '[' -z rdma ']' 00:29:38.914 16:20:09 -- nvmf/common.sh@434 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.914 16:20:09 -- nvmf/common.sh@436 -- # prepare_net_devs 00:29:38.914 16:20:09 -- nvmf/common.sh@398 -- # local -g is_hw=no 00:29:38.914 16:20:09 -- nvmf/common.sh@400 -- # remove_spdk_ns 00:29:38.914 16:20:09 -- nvmf/common.sh@616 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:38.914 16:20:09 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:38.914 16:20:09 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:29:38.914 16:20:09 -- nvmf/common.sh@402 -- # [[ phy != virt ]] 00:29:38.914 16:20:09 -- nvmf/common.sh@402 -- # gather_supported_nvmf_pci_devs 00:29:38.914 16:20:09 -- nvmf/common.sh@284 -- # xtrace_disable 00:29:38.914 16:20:09 -- common/autotest_common.sh@10 -- # set +x 00:29:45.486 16:20:16 -- nvmf/common.sh@288 -- # local intel=0x8086 mellanox=0x15b3 pci 00:29:45.486 16:20:16 -- nvmf/common.sh@290 -- # pci_devs=() 00:29:45.486 16:20:16 -- nvmf/common.sh@290 -- # local -a pci_devs 00:29:45.486 16:20:16 -- nvmf/common.sh@291 -- # pci_net_devs=() 00:29:45.486 16:20:16 -- nvmf/common.sh@291 -- # local -a pci_net_devs 00:29:45.486 16:20:16 -- nvmf/common.sh@292 -- # pci_drivers=() 00:29:45.486 16:20:16 -- nvmf/common.sh@292 -- # local -A pci_drivers 00:29:45.486 16:20:16 -- nvmf/common.sh@294 -- # net_devs=() 00:29:45.486 16:20:16 -- nvmf/common.sh@294 -- # local -ga net_devs 00:29:45.486 16:20:16 -- nvmf/common.sh@295 -- # e810=() 00:29:45.486 16:20:16 -- nvmf/common.sh@295 -- # local -ga e810 00:29:45.486 16:20:16 -- nvmf/common.sh@296 -- # x722=() 00:29:45.486 16:20:16 -- nvmf/common.sh@296 -- # local -ga x722 00:29:45.486 16:20:16 -- nvmf/common.sh@297 -- # mlx=() 00:29:45.486 16:20:16 -- nvmf/common.sh@297 -- # local -ga mlx 00:29:45.486 16:20:16 -- nvmf/common.sh@300 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:45.486 16:20:16 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:45.486 16:20:16 -- nvmf/common.sh@303 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:45.486 16:20:16 -- nvmf/common.sh@305 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:45.486 16:20:16 -- nvmf/common.sh@307 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:45.486 16:20:16 -- nvmf/common.sh@309 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:45.486 16:20:16 -- nvmf/common.sh@311 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:45.486 16:20:16 -- nvmf/common.sh@313 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:45.486 16:20:16 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:45.486 16:20:16 -- nvmf/common.sh@316 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:45.486 16:20:16 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:45.486 16:20:16 -- nvmf/common.sh@319 -- # pci_devs+=("${e810[@]}") 00:29:45.486 16:20:16 -- nvmf/common.sh@320 -- # [[ rdma == rdma ]] 00:29:45.486 16:20:16 -- nvmf/common.sh@321 -- # pci_devs+=("${x722[@]}") 00:29:45.486 16:20:16 -- nvmf/common.sh@322 -- # pci_devs+=("${mlx[@]}") 00:29:45.486 16:20:16 -- nvmf/common.sh@326 -- # [[ mlx5 == mlx5 ]] 00:29:45.486 16:20:16 -- nvmf/common.sh@327 -- # pci_devs=("${mlx[@]}") 00:29:45.486 16:20:16 -- nvmf/common.sh@334 -- # (( 2 == 0 )) 00:29:45.486 16:20:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:45.486 16:20:16 -- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.0 (0x15b3 - 0x1015)' 00:29:45.486 Found 0000:d9:00.0 (0x15b3 - 0x1015) 00:29:45.486 16:20:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:45.486 16:20:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:45.486 16:20:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:45.486 16:20:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:45.486 16:20:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:45.486 16:20:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:45.486 16:20:16 -- nvmf/common.sh@339 -- # for pci in "${pci_devs[@]}" 00:29:45.486 16:20:16 
-- nvmf/common.sh@340 -- # echo 'Found 0000:d9:00.1 (0x15b3 - 0x1015)' 00:29:45.486 Found 0000:d9:00.1 (0x15b3 - 0x1015) 00:29:45.486 16:20:16 -- nvmf/common.sh@341 -- # [[ mlx5_core == unknown ]] 00:29:45.486 16:20:16 -- nvmf/common.sh@345 -- # [[ mlx5_core == unbound ]] 00:29:45.486 16:20:16 -- nvmf/common.sh@349 -- # [[ 0x1015 == \0\x\1\0\1\7 ]] 00:29:45.486 16:20:16 -- nvmf/common.sh@350 -- # [[ 0x1015 == \0\x\1\0\1\9 ]] 00:29:45.486 16:20:16 -- nvmf/common.sh@351 -- # [[ rdma == rdma ]] 00:29:45.486 16:20:16 -- nvmf/common.sh@361 -- # NVME_CONNECT='nvme connect -i 15' 00:29:45.486 16:20:16 -- nvmf/common.sh@365 -- # (( 0 > 0 )) 00:29:45.486 16:20:16 -- nvmf/common.sh@371 -- # [[ mlx5 == e810 ]] 00:29:45.486 16:20:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:45.486 16:20:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.486 16:20:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:45.486 16:20:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.486 16:20:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.0: mlx_0_0' 00:29:45.486 Found net devices under 0000:d9:00.0: mlx_0_0 00:29:45.486 16:20:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.486 16:20:16 -- nvmf/common.sh@381 -- # for pci in "${pci_devs[@]}" 00:29:45.486 16:20:16 -- nvmf/common.sh@382 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.486 16:20:16 -- nvmf/common.sh@383 -- # (( 1 == 0 )) 00:29:45.486 16:20:16 -- nvmf/common.sh@387 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.486 16:20:16 -- nvmf/common.sh@388 -- # echo 'Found net devices under 0000:d9:00.1: mlx_0_1' 00:29:45.486 Found net devices under 0000:d9:00.1: mlx_0_1 00:29:45.486 16:20:16 -- nvmf/common.sh@389 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.486 16:20:16 -- nvmf/common.sh@392 -- # (( 2 == 0 )) 00:29:45.486 16:20:16 -- nvmf/common.sh@402 -- # is_hw=yes 00:29:45.486 16:20:16 -- nvmf/common.sh@404 -- # [[ yes == yes ]] 00:29:45.486 16:20:16 -- nvmf/common.sh@405 -- # [[ rdma == tcp ]] 00:29:45.486 16:20:16 -- nvmf/common.sh@407 -- # [[ rdma == rdma ]] 00:29:45.486 16:20:16 -- nvmf/common.sh@408 -- # rdma_device_init 00:29:45.486 16:20:16 -- nvmf/common.sh@489 -- # load_ib_rdma_modules 00:29:45.486 16:20:16 -- nvmf/common.sh@57 -- # uname 00:29:45.486 16:20:16 -- nvmf/common.sh@57 -- # '[' Linux '!=' Linux ']' 00:29:45.486 16:20:16 -- nvmf/common.sh@61 -- # modprobe ib_cm 00:29:45.486 16:20:16 -- nvmf/common.sh@62 -- # modprobe ib_core 00:29:45.486 16:20:16 -- nvmf/common.sh@63 -- # modprobe ib_umad 00:29:45.486 16:20:16 -- nvmf/common.sh@64 -- # modprobe ib_uverbs 00:29:45.486 16:20:16 -- nvmf/common.sh@65 -- # modprobe iw_cm 00:29:45.486 16:20:16 -- nvmf/common.sh@66 -- # modprobe rdma_cm 00:29:45.486 16:20:16 -- nvmf/common.sh@67 -- # modprobe rdma_ucm 00:29:45.486 16:20:16 -- nvmf/common.sh@490 -- # allocate_nic_ips 00:29:45.486 16:20:16 -- nvmf/common.sh@71 -- # (( count = NVMF_IP_LEAST_ADDR )) 00:29:45.486 16:20:16 -- nvmf/common.sh@72 -- # get_rdma_if_list 00:29:45.486 16:20:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:45.486 16:20:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:45.486 16:20:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:45.486 16:20:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:45.486 16:20:16 -- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:45.486 16:20:16 -- nvmf/common.sh@100 
-- # for net_dev in "${net_devs[@]}" 00:29:45.486 16:20:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:45.486 16:20:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:45.486 16:20:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:45.487 16:20:16 -- nvmf/common.sh@104 -- # continue 2 00:29:45.487 16:20:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:45.487 16:20:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:45.487 16:20:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:45.487 16:20:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:45.487 16:20:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:45.487 16:20:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:45.487 16:20:16 -- nvmf/common.sh@104 -- # continue 2 00:29:45.487 16:20:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:45.487 16:20:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_0 00:29:45.487 16:20:16 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:45.487 16:20:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:45.487 16:20:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:45.487 16:20:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:45.487 16:20:16 -- nvmf/common.sh@73 -- # ip=192.168.100.8 00:29:45.487 16:20:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.8 ]] 00:29:45.487 16:20:16 -- nvmf/common.sh@80 -- # ip addr show mlx_0_0 00:29:45.487 6: mlx_0_0: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:45.487 link/ether ec:0d:9a:8b:2d:dc brd ff:ff:ff:ff:ff:ff 00:29:45.487 altname enp217s0f0np0 00:29:45.487 altname ens818f0np0 00:29:45.487 inet 192.168.100.8/24 scope global mlx_0_0 00:29:45.487 valid_lft forever preferred_lft forever 00:29:45.487 16:20:16 -- nvmf/common.sh@72 -- # for nic_name in $(get_rdma_if_list) 00:29:45.487 16:20:16 -- nvmf/common.sh@73 -- # get_ip_address mlx_0_1 00:29:45.487 16:20:16 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:45.487 16:20:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:45.487 16:20:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:45.487 16:20:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:45.487 16:20:16 -- nvmf/common.sh@73 -- # ip=192.168.100.9 00:29:45.487 16:20:16 -- nvmf/common.sh@74 -- # [[ -z 192.168.100.9 ]] 00:29:45.487 16:20:16 -- nvmf/common.sh@80 -- # ip addr show mlx_0_1 00:29:45.487 7: mlx_0_1: mtu 1500 qdisc mq state DOWN group default qlen 1000 00:29:45.487 link/ether ec:0d:9a:8b:2d:dd brd ff:ff:ff:ff:ff:ff 00:29:45.487 altname enp217s0f1np1 00:29:45.487 altname ens818f1np1 00:29:45.487 inet 192.168.100.9/24 scope global mlx_0_1 00:29:45.487 valid_lft forever preferred_lft forever 00:29:45.487 16:20:16 -- nvmf/common.sh@410 -- # return 0 00:29:45.487 16:20:16 -- nvmf/common.sh@438 -- # '[' '' == iso ']' 00:29:45.487 16:20:16 -- nvmf/common.sh@442 -- # NVMF_TRANSPORT_OPTS='-t rdma' 00:29:45.487 16:20:16 -- nvmf/common.sh@443 -- # [[ rdma == \r\d\m\a ]] 00:29:45.487 16:20:16 -- nvmf/common.sh@444 -- # get_available_rdma_ips 00:29:45.487 16:20:16 -- nvmf/common.sh@85 -- # get_rdma_if_list 00:29:45.487 16:20:16 -- nvmf/common.sh@91 -- # local net_dev rxe_net_dev rxe_net_devs 00:29:45.487 16:20:16 -- nvmf/common.sh@93 -- # mapfile -t rxe_net_devs 00:29:45.487 16:20:16 -- nvmf/common.sh@93 -- # rxe_cfg rxe-net 00:29:45.487 16:20:16 -- nvmf/common.sh@53 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/rxe_cfg_small.sh rxe-net 00:29:45.487 16:20:16 
-- nvmf/common.sh@95 -- # (( 2 == 0 )) 00:29:45.487 16:20:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:45.487 16:20:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:45.487 16:20:16 -- nvmf/common.sh@102 -- # [[ mlx_0_0 == \m\l\x\_\0\_\0 ]] 00:29:45.487 16:20:16 -- nvmf/common.sh@103 -- # echo mlx_0_0 00:29:45.487 16:20:16 -- nvmf/common.sh@104 -- # continue 2 00:29:45.487 16:20:16 -- nvmf/common.sh@100 -- # for net_dev in "${net_devs[@]}" 00:29:45.487 16:20:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:45.487 16:20:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\0 ]] 00:29:45.487 16:20:16 -- nvmf/common.sh@101 -- # for rxe_net_dev in "${rxe_net_devs[@]}" 00:29:45.487 16:20:16 -- nvmf/common.sh@102 -- # [[ mlx_0_1 == \m\l\x\_\0\_\1 ]] 00:29:45.487 16:20:16 -- nvmf/common.sh@103 -- # echo mlx_0_1 00:29:45.487 16:20:16 -- nvmf/common.sh@104 -- # continue 2 00:29:45.487 16:20:16 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:45.487 16:20:16 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_0 00:29:45.487 16:20:16 -- nvmf/common.sh@111 -- # interface=mlx_0_0 00:29:45.746 16:20:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_0 00:29:45.746 16:20:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:45.746 16:20:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:45.746 16:20:16 -- nvmf/common.sh@85 -- # for nic_name in $(get_rdma_if_list) 00:29:45.747 16:20:16 -- nvmf/common.sh@86 -- # get_ip_address mlx_0_1 00:29:45.747 16:20:16 -- nvmf/common.sh@111 -- # interface=mlx_0_1 00:29:45.747 16:20:16 -- nvmf/common.sh@112 -- # ip -o -4 addr show mlx_0_1 00:29:45.747 16:20:16 -- nvmf/common.sh@112 -- # awk '{print $4}' 00:29:45.747 16:20:16 -- nvmf/common.sh@112 -- # cut -d/ -f1 00:29:45.747 16:20:16 -- nvmf/common.sh@444 -- # RDMA_IP_LIST='192.168.100.8 00:29:45.747 192.168.100.9' 00:29:45.747 16:20:16 -- nvmf/common.sh@445 -- # echo '192.168.100.8 00:29:45.747 192.168.100.9' 00:29:45.747 16:20:16 -- nvmf/common.sh@445 -- # head -n 1 00:29:45.747 16:20:16 -- nvmf/common.sh@445 -- # NVMF_FIRST_TARGET_IP=192.168.100.8 00:29:45.747 16:20:16 -- nvmf/common.sh@446 -- # echo '192.168.100.8 00:29:45.747 192.168.100.9' 00:29:45.747 16:20:16 -- nvmf/common.sh@446 -- # tail -n +2 00:29:45.747 16:20:16 -- nvmf/common.sh@446 -- # head -n 1 00:29:45.747 16:20:16 -- nvmf/common.sh@446 -- # NVMF_SECOND_TARGET_IP=192.168.100.9 00:29:45.747 16:20:16 -- nvmf/common.sh@447 -- # '[' -z 192.168.100.8 ']' 00:29:45.747 16:20:16 -- nvmf/common.sh@451 -- # NVMF_TRANSPORT_OPTS='-t rdma --num-shared-buffers 1024' 00:29:45.747 16:20:16 -- nvmf/common.sh@456 -- # '[' rdma == tcp ']' 00:29:45.747 16:20:16 -- nvmf/common.sh@456 -- # '[' rdma == rdma ']' 00:29:45.747 16:20:16 -- nvmf/common.sh@462 -- # modprobe nvme-rdma 00:29:45.747 16:20:16 -- spdkcli/nvmf.sh@24 -- # NVMF_TARGET_IP=192.168.100.8 00:29:45.747 16:20:16 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:29:45.747 16:20:16 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:45.747 16:20:16 -- common/autotest_common.sh@10 -- # set +x 00:29:45.747 16:20:16 -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:29:45.747 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:29:45.747 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:29:45.747 '\''/bdevs/malloc create 32 512 Malloc4'\'' 
'\''Malloc4'\'' True 00:29:45.747 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:29:45.747 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:29:45.747 '\''nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:29:45.747 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:45.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:29:45.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:29:45.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:45.747 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:45.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:29:45.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:45.747 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:45.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:29:45.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4'\'' '\''192.168.100.8:4260'\'' True 00:29:45.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:45.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:29:45.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:45.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:29:45.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:29:45.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4261 IPv4'\'' '\''192.168.100.8:4261'\'' True 00:29:45.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4'\'' '\''192.168.100.8:4262'\'' True 00:29:45.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:29:45.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:29:45.747 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:29:45.747 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:29:45.747 ' 00:29:46.006 [2024-11-20 16:20:16.705418] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:29:48.649 [2024-11-20 16:20:18.769902] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_0(0xd12930/0xd15180) succeed. 
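The quoted spdkcli_job.py batch above creates the malloc bdevs, the RDMA transport, and the subsystems with their namespaces, listeners, and allowed hosts. Once it completes, the check_match step further below dumps the resulting /nvmf tree and compares it against a reference file; doing the same by hand with the paths from this run looks roughly like:

    SPDK=/var/jenkins/workspace/nvmf-phy-autotest/spdk
    # Dump the live configuration tree the way check_match does...
    $SPDK/scripts/spdkcli.py ll /nvmf > $SPDK/test/spdkcli/match_files/spdkcli_nvmf.test
    # ...compare it against the expected output, then clean up.
    $SPDK/test/app/match/match $SPDK/test/spdkcli/match_files/spdkcli_nvmf.test.match
    rm -f $SPDK/test/spdkcli/match_files/spdkcli_nvmf.test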
00:29:48.649 [2024-11-20 16:20:18.779919] rdma.c:2631:create_ib_device: *NOTICE*: Create IB device mlx5_1(0xd13fc0/0xd56820) succeed. 00:29:49.587 [2024-11-20 16:20:20.027364] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4260 *** 00:29:51.490 [2024-11-20 16:20:22.266558] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4261 *** 00:29:53.396 [2024-11-20 16:20:24.197036] rdma.c:3082:nvmf_rdma_listen: *NOTICE*: *** NVMe/RDMA Target Listening on 192.168.100.8 port 4262 *** 00:29:55.304 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:55.304 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:55.304 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:55.305 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:55.305 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:55.305 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:55.305 Executing command: ['nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:29:55.305 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:55.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:55.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:55.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:55.305 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:55.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:55.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:55.305 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:55.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:55.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4260 IPv4', '192.168.100.8:4260', True] 00:29:55.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create rdma 192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:29:55.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:55.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:55.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:55.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:55.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 
192.168.100.8 4261 IPv4', '192.168.100.8:4261', True] 00:29:55.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4262 IPv4', '192.168.100.8:4262', True] 00:29:55.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:55.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:55.305 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:55.305 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:55.305 16:20:25 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:55.305 16:20:25 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:55.305 16:20:25 -- common/autotest_common.sh@10 -- # set +x 00:29:55.305 16:20:25 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:55.305 16:20:25 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:55.305 16:20:25 -- common/autotest_common.sh@10 -- # set +x 00:29:55.305 16:20:25 -- spdkcli/nvmf.sh@69 -- # check_match 00:29:55.305 16:20:25 -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:55.564 16:20:26 -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:55.564 16:20:26 -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:55.564 16:20:26 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:55.564 16:20:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:55.564 16:20:26 -- common/autotest_common.sh@10 -- # set +x 00:29:55.564 16:20:26 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:55.564 16:20:26 -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:55.564 16:20:26 -- common/autotest_common.sh@10 -- # set +x 00:29:55.564 16:20:26 -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:55.564 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:55.564 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:55.564 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:55.564 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262'\'' '\''192.168.100.8:4262'\'' 00:29:55.564 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''192.168.100.8:4261'\'' 00:29:55.564 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:55.564 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:55.564 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:55.564 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:55.564 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:55.564 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:55.564 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:55.564 '\''/bdevs/malloc 
delete Malloc1'\'' '\''Malloc1'\'' 00:29:55.564 ' 00:30:00.838 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:00.838 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:00.838 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:00.838 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:00.838 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete rdma 192.168.100.8 4262', '192.168.100.8:4262', False] 00:30:00.838 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '192.168.100.8:4261', False] 00:30:00.838 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:00.838 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:00.838 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:00.838 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:00.838 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:30:00.838 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:00.838 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:00.838 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:00.838 16:20:31 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:00.838 16:20:31 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:00.838 16:20:31 -- common/autotest_common.sh@10 -- # set +x 00:30:00.838 16:20:31 -- spdkcli/nvmf.sh@90 -- # killprocess 1510696 00:30:00.838 16:20:31 -- common/autotest_common.sh@936 -- # '[' -z 1510696 ']' 00:30:00.838 16:20:31 -- common/autotest_common.sh@940 -- # kill -0 1510696 00:30:00.838 16:20:31 -- common/autotest_common.sh@941 -- # uname 00:30:00.838 16:20:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:00.838 16:20:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 1510696 00:30:00.838 16:20:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:00.838 16:20:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:00.838 16:20:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 1510696' 00:30:00.838 killing process with pid 1510696 00:30:00.838 16:20:31 -- common/autotest_common.sh@955 -- # kill 1510696 00:30:00.838 [2024-11-20 16:20:31.382233] app.c: 883:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:30:00.838 16:20:31 -- common/autotest_common.sh@960 -- # wait 1510696 00:30:00.838 16:20:31 -- spdkcli/nvmf.sh@1 -- # nvmftestfini 00:30:00.838 16:20:31 -- nvmf/common.sh@476 -- # nvmfcleanup 00:30:00.838 16:20:31 -- nvmf/common.sh@116 -- # sync 00:30:00.838 16:20:31 -- nvmf/common.sh@118 -- # '[' rdma == tcp ']' 00:30:00.838 16:20:31 -- nvmf/common.sh@118 -- # '[' rdma == rdma ']' 00:30:00.838 16:20:31 -- nvmf/common.sh@119 -- # set +e 00:30:00.838 16:20:31 -- nvmf/common.sh@120 -- # for i in {1..20} 00:30:00.838 16:20:31 -- nvmf/common.sh@121 -- # modprobe -v -r nvme-rdma 00:30:00.838 rmmod nvme_rdma 00:30:00.838 rmmod nvme_fabrics 
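The spdkcli_job.py passes above (spdkcli_create_nvmf_config and spdkcli_clear_nvmf_config) drive SPDK's interactive CLI with the command lists echoed in the trace. A trimmed sketch of replaying a few of the same commands one at a time through scripts/spdkcli.py, the tool used for the "ll /nvmf" match check; this assumes spdkcli.py executes a command passed on its command line, as in that call, and that an SPDK nvmf target is already running on the default RPC socket:

#!/usr/bin/env bash
# Condensed replay of part of the configuration exercised above. The NQN, serial
# number, and 192.168.100.8 listen address are taken from this run's trace; adjust
# them (and the path to spdkcli.py) for another setup.
SPDKCLI=./scripts/spdkcli.py

$SPDKCLI "/bdevs/malloc create 32 512 Malloc3"
$SPDKCLI "nvmf/transport create rdma max_io_qpairs_per_ctrlr=4 io_unit_size=8192"
$SPDKCLI "/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True"
$SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1"
$SPDKCLI "/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create rdma 192.168.100.8 4260 IPv4"

# Teardown, mirroring the clear_nvmf_config pass.
$SPDKCLI "/nvmf/subsystem delete_all"
$SPDKCLI "/bdevs/malloc delete Malloc3"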
00:30:01.097 16:20:31 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-fabrics 00:30:01.097 16:20:31 -- nvmf/common.sh@123 -- # set -e 00:30:01.097 16:20:31 -- nvmf/common.sh@124 -- # return 0 00:30:01.097 16:20:31 -- nvmf/common.sh@477 -- # '[' -n '' ']' 00:30:01.097 16:20:31 -- nvmf/common.sh@480 -- # '[' '' == iso ']' 00:30:01.097 16:20:31 -- nvmf/common.sh@483 -- # [[ rdma == \t\c\p ]] 00:30:01.097 00:30:01.097 real 0m23.308s 00:30:01.097 user 0m49.721s 00:30:01.097 sys 0m6.070s 00:30:01.097 16:20:31 -- common/autotest_common.sh@1115 -- # xtrace_disable 00:30:01.097 16:20:31 -- common/autotest_common.sh@10 -- # set +x 00:30:01.097 ************************************ 00:30:01.097 END TEST spdkcli_nvmf_rdma 00:30:01.097 ************************************ 00:30:01.097 16:20:31 -- spdk/autotest.sh@298 -- # '[' 0 -eq 1 ']' 00:30:01.097 16:20:31 -- spdk/autotest.sh@302 -- # '[' 0 -eq 1 ']' 00:30:01.097 16:20:31 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:30:01.097 16:20:31 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:30:01.097 16:20:31 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:30:01.097 16:20:31 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:30:01.097 16:20:31 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:30:01.097 16:20:31 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:30:01.097 16:20:31 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:30:01.097 16:20:31 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:30:01.097 16:20:31 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:30:01.097 16:20:31 -- spdk/autotest.sh@353 -- # [[ 0 -eq 1 ]] 00:30:01.097 16:20:31 -- spdk/autotest.sh@357 -- # [[ 0 -eq 1 ]] 00:30:01.097 16:20:31 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:30:01.097 16:20:31 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:30:01.097 16:20:31 -- spdk/autotest.sh@370 -- # trap - SIGINT SIGTERM EXIT 00:30:01.097 16:20:31 -- spdk/autotest.sh@372 -- # timing_enter post_cleanup 00:30:01.097 16:20:31 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:01.097 16:20:31 -- common/autotest_common.sh@10 -- # set +x 00:30:01.098 16:20:31 -- spdk/autotest.sh@373 -- # autotest_cleanup 00:30:01.098 16:20:31 -- common/autotest_common.sh@1381 -- # local autotest_es=0 00:30:01.098 16:20:31 -- common/autotest_common.sh@1382 -- # xtrace_disable 00:30:01.098 16:20:31 -- common/autotest_common.sh@10 -- # set +x 00:30:07.667 INFO: APP EXITING 00:30:07.667 INFO: killing all VMs 00:30:07.667 INFO: killing vhost app 00:30:07.667 INFO: EXIT DONE 00:30:09.573 Waiting for block devices as requested 00:30:09.573 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:09.573 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:09.573 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:09.573 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:09.832 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:09.832 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:09.832 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:09.832 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:10.091 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:10.091 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:10.091 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:10.350 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:10.350 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:10.350 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:10.609 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:10.609 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:10.609 0000:d8:00.0 (8086 0a54): vfio-pci -> nvme 00:30:14.803 
Cleaning 00:30:14.803 Removing: /var/run/dpdk/spdk0/config 00:30:14.803 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:14.803 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:14.803 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:14.803 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:14.803 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:14.803 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:14.803 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:14.803 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:14.803 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:14.803 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:14.803 Removing: /var/run/dpdk/spdk1/config 00:30:14.803 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:14.803 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:14.803 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:14.803 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:14.803 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:14.803 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:14.803 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:14.803 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:14.803 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:14.803 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:14.803 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:14.803 Removing: /var/run/dpdk/spdk2/config 00:30:14.803 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:14.803 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:14.803 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:14.803 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:14.803 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:14.803 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:14.803 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:14.803 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:14.803 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:14.803 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:14.803 Removing: /var/run/dpdk/spdk3/config 00:30:14.803 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:14.803 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:14.803 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:14.803 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:14.803 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:14.803 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:14.803 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:14.803 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:14.803 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:14.803 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:14.803 Removing: /var/run/dpdk/spdk4/config 00:30:14.803 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:14.803 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:14.803 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:14.803 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:14.803 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:14.803 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:14.803 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:14.803 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:14.803 Removing: /var/run/dpdk/spdk4/fbarray_memzone 
00:30:14.803 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:14.803 Removing: /dev/shm/bdevperf_trace.pid1340386 00:30:14.803 Removing: /dev/shm/bdevperf_trace.pid1434707 00:30:14.803 Removing: /dev/shm/bdev_svc_trace.1 00:30:14.803 Removing: /dev/shm/nvmf_trace.0 00:30:14.803 Removing: /dev/shm/spdk_tgt_trace.pid1176636 00:30:14.803 Removing: /var/run/dpdk/spdk0 00:30:14.803 Removing: /var/run/dpdk/spdk1 00:30:14.803 Removing: /var/run/dpdk/spdk2 00:30:14.803 Removing: /var/run/dpdk/spdk3 00:30:14.803 Removing: /var/run/dpdk/spdk4 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1173901 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1175183 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1176636 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1177262 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1182304 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1184098 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1184798 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1185249 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1185598 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1185929 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1186179 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1186325 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1186593 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1187672 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1190771 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1191075 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1191422 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1191503 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1192077 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1192346 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1192870 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1192926 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1193227 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1193467 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1193551 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1193810 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1194298 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1194474 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1194805 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1195109 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1195138 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1195425 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1195577 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1195774 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1196025 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1196312 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1196580 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1196861 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1197098 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1197289 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1197447 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1197725 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1197997 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1198280 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1198547 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1198795 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1198938 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1199146 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1199410 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1199699 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1199965 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1200246 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1200467 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1200658 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1200830 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1201112 00:30:14.803 Removing: 
/var/run/dpdk/spdk_pid1201386 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1201667 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1201935 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1202201 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1202351 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1202546 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1202795 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1203084 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1203353 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1203639 00:30:14.803 Removing: /var/run/dpdk/spdk_pid1203906 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1204110 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1204259 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1204505 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1204779 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1205061 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1205187 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1205479 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1209616 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1306700 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1310925 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1321380 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1326753 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1330483 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1331298 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1340386 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1340749 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1344804 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1351042 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1354217 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1364315 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1389138 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1392853 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1397958 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1432507 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1433472 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1434707 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1438859 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1446609 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1447512 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1448497 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1449329 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1449847 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1454134 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1454154 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1458709 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1459250 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1459858 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1460597 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1460641 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1463102 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1465068 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1466998 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1468940 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1470819 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1472706 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1478886 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1479517 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1481968 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1483057 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1490461 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1493330 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1498911 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1499189 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1505081 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1505535 00:30:14.804 Removing: /var/run/dpdk/spdk_pid1507580 00:30:14.804 Removing: 
/var/run/dpdk/spdk_pid1510696 00:30:14.804 Clean 00:30:15.063 killing process with pid 1123453 00:30:33.155 killing process with pid 1123450 00:30:33.155 killing process with pid 1123452 00:30:33.155 killing process with pid 1123451 00:30:33.155 16:21:01 -- common/autotest_common.sh@1446 -- # return 0 00:30:33.155 16:21:01 -- spdk/autotest.sh@374 -- # timing_exit post_cleanup 00:30:33.155 16:21:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:33.155 16:21:01 -- common/autotest_common.sh@10 -- # set +x 00:30:33.155 16:21:01 -- spdk/autotest.sh@376 -- # timing_exit autotest 00:30:33.155 16:21:01 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:33.155 16:21:01 -- common/autotest_common.sh@10 -- # set +x 00:30:33.155 16:21:01 -- spdk/autotest.sh@377 -- # chmod a+r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:30:33.155 16:21:01 -- spdk/autotest.sh@379 -- # [[ -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log ]] 00:30:33.155 16:21:01 -- spdk/autotest.sh@379 -- # rm -f /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/udev.log 00:30:33.155 16:21:01 -- spdk/autotest.sh@381 -- # [[ y == y ]] 00:30:33.155 16:21:01 -- spdk/autotest.sh@383 -- # hostname 00:30:33.155 16:21:01 -- spdk/autotest.sh@383 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-phy-autotest/spdk -t spdk-wfp-21 -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info 00:30:33.155 geninfo: WARNING: invalid characters removed from testname! 00:30:51.288 16:21:20 -- spdk/autotest.sh@384 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:51.548 16:21:22 -- spdk/autotest.sh@385 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:53.487 16:21:23 -- spdk/autotest.sh@389 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:54.866 16:21:25 -- spdk/autotest.sh@390 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:56.245 16:21:26 -- spdk/autotest.sh@391 -- # 
lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:58.149 16:21:28 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/cov_total.info 00:30:59.662 16:21:30 -- spdk/autotest.sh@393 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:30:59.662 16:21:30 -- common/autotest_common.sh@1689 -- $ [[ y == y ]] 00:30:59.662 16:21:30 -- common/autotest_common.sh@1690 -- $ lcov --version 00:30:59.662 16:21:30 -- common/autotest_common.sh@1690 -- $ awk '{print $NF}' 00:30:59.662 16:21:30 -- common/autotest_common.sh@1690 -- $ lt 1.15 2 00:30:59.662 16:21:30 -- scripts/common.sh@372 -- $ cmp_versions 1.15 '<' 2 00:30:59.662 16:21:30 -- scripts/common.sh@332 -- $ local ver1 ver1_l 00:30:59.662 16:21:30 -- scripts/common.sh@333 -- $ local ver2 ver2_l 00:30:59.662 16:21:30 -- scripts/common.sh@335 -- $ IFS=.-: 00:30:59.662 16:21:30 -- scripts/common.sh@335 -- $ read -ra ver1 00:30:59.662 16:21:30 -- scripts/common.sh@336 -- $ IFS=.-: 00:30:59.662 16:21:30 -- scripts/common.sh@336 -- $ read -ra ver2 00:30:59.663 16:21:30 -- scripts/common.sh@337 -- $ local 'op=<' 00:30:59.663 16:21:30 -- scripts/common.sh@339 -- $ ver1_l=2 00:30:59.663 16:21:30 -- scripts/common.sh@340 -- $ ver2_l=1 00:30:59.663 16:21:30 -- scripts/common.sh@342 -- $ local lt=0 gt=0 eq=0 v 00:30:59.663 16:21:30 -- scripts/common.sh@343 -- $ case "$op" in 00:30:59.663 16:21:30 -- scripts/common.sh@344 -- $ : 1 00:30:59.663 16:21:30 -- scripts/common.sh@363 -- $ (( v = 0 )) 00:30:59.663 16:21:30 -- scripts/common.sh@363 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:59.663 16:21:30 -- scripts/common.sh@364 -- $ decimal 1 00:30:59.663 16:21:30 -- scripts/common.sh@352 -- $ local d=1 00:30:59.663 16:21:30 -- scripts/common.sh@353 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:30:59.663 16:21:30 -- scripts/common.sh@354 -- $ echo 1 00:30:59.663 16:21:30 -- scripts/common.sh@364 -- $ ver1[v]=1 00:30:59.663 16:21:30 -- scripts/common.sh@365 -- $ decimal 2 00:30:59.663 16:21:30 -- scripts/common.sh@352 -- $ local d=2 00:30:59.663 16:21:30 -- scripts/common.sh@353 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:30:59.663 16:21:30 -- scripts/common.sh@354 -- $ echo 2 00:30:59.663 16:21:30 -- scripts/common.sh@365 -- $ ver2[v]=2 00:30:59.663 16:21:30 -- scripts/common.sh@366 -- $ (( ver1[v] > ver2[v] )) 00:30:59.663 16:21:30 -- scripts/common.sh@367 -- $ (( ver1[v] < ver2[v] )) 00:30:59.663 16:21:30 -- scripts/common.sh@367 -- $ return 0 00:30:59.663 16:21:30 -- common/autotest_common.sh@1691 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:59.663 16:21:30 -- common/autotest_common.sh@1703 -- $ export 'LCOV_OPTS= 00:30:59.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.663 --rc genhtml_branch_coverage=1 00:30:59.663 --rc genhtml_function_coverage=1 00:30:59.663 --rc genhtml_legend=1 00:30:59.663 --rc geninfo_all_blocks=1 00:30:59.663 --rc geninfo_unexecuted_blocks=1 00:30:59.663 00:30:59.663 ' 00:30:59.663 16:21:30 -- common/autotest_common.sh@1703 -- $ LCOV_OPTS=' 00:30:59.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.663 --rc genhtml_branch_coverage=1 00:30:59.663 --rc genhtml_function_coverage=1 00:30:59.663 --rc genhtml_legend=1 00:30:59.663 --rc geninfo_all_blocks=1 00:30:59.663 --rc geninfo_unexecuted_blocks=1 00:30:59.663 00:30:59.663 ' 00:30:59.663 16:21:30 -- common/autotest_common.sh@1704 -- $ export 'LCOV=lcov 00:30:59.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.663 --rc genhtml_branch_coverage=1 00:30:59.663 --rc genhtml_function_coverage=1 00:30:59.663 --rc genhtml_legend=1 00:30:59.663 --rc geninfo_all_blocks=1 00:30:59.663 --rc geninfo_unexecuted_blocks=1 00:30:59.663 00:30:59.663 ' 00:30:59.663 16:21:30 -- common/autotest_common.sh@1704 -- $ LCOV='lcov 00:30:59.663 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.663 --rc genhtml_branch_coverage=1 00:30:59.663 --rc genhtml_function_coverage=1 00:30:59.663 --rc genhtml_legend=1 00:30:59.663 --rc geninfo_all_blocks=1 00:30:59.663 --rc geninfo_unexecuted_blocks=1 00:30:59.663 00:30:59.663 ' 00:30:59.663 16:21:30 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-phy-autotest/spdk/scripts/common.sh 00:30:59.663 16:21:30 -- scripts/common.sh@433 -- $ [[ -e /bin/wpdk_common.sh ]] 00:30:59.663 16:21:30 -- scripts/common.sh@441 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:59.663 16:21:30 -- scripts/common.sh@442 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:59.663 16:21:30 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.663 16:21:30 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.663 16:21:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.663 16:21:30 -- paths/export.sh@5 -- $ export PATH 00:30:59.663 16:21:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:59.663 16:21:30 -- common/autobuild_common.sh@439 -- $ out=/var/jenkins/workspace/nvmf-phy-autotest/spdk/../output 00:30:59.663 16:21:30 -- common/autobuild_common.sh@440 -- $ date +%s 00:30:59.663 16:21:30 -- common/autobuild_common.sh@440 -- $ mktemp -dt spdk_1732116090.XXXXXX 00:30:59.663 16:21:30 -- common/autobuild_common.sh@440 -- $ SPDK_WORKSPACE=/tmp/spdk_1732116090.5lj9Ng 00:30:59.663 16:21:30 -- common/autobuild_common.sh@442 -- $ [[ -n '' ]] 00:30:59.663 16:21:30 -- common/autobuild_common.sh@446 -- $ '[' -n v22.11.4 ']' 00:30:59.663 16:21:30 -- common/autobuild_common.sh@447 -- $ dirname /var/jenkins/workspace/nvmf-phy-autotest/dpdk/build 00:30:59.663 16:21:30 -- common/autobuild_common.sh@447 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk' 00:30:59.663 16:21:30 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp' 00:30:59.663 16:21:30 -- common/autobuild_common.sh@455 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:30:59.663 16:21:30 -- common/autobuild_common.sh@456 -- $ get_config_params 00:30:59.663 16:21:30 -- common/autotest_common.sh@397 -- $ xtrace_disable 00:30:59.663 16:21:30 -- common/autotest_common.sh@10 -- $ set +x 00:30:59.663 16:21:30 -- common/autobuild_common.sh@456 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-dpdk=/var/jenkins/workspace/nvmf-phy-autotest/dpdk/build' 00:30:59.663 16:21:30 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j112 00:30:59.663 16:21:30 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:30:59.663 16:21:30 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:30:59.663 16:21:30 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:30:59.663 16:21:30 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:30:59.663 16:21:30 -- spdk/autopackage.sh@19 -- $ timing_finish 00:30:59.663 
16:21:30 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:30:59.663 16:21:30 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:30:59.663 16:21:30 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-phy-autotest/spdk/../output/timing.txt 00:30:59.663 16:21:30 -- spdk/autopackage.sh@20 -- $ exit 0 00:30:59.663 + [[ -n 1069700 ]] 00:30:59.663 + sudo kill 1069700 00:30:59.674 [Pipeline] } 00:30:59.690 [Pipeline] // stage 00:30:59.695 [Pipeline] } 00:30:59.714 [Pipeline] // timeout 00:30:59.720 [Pipeline] } 00:30:59.738 [Pipeline] // catchError 00:30:59.744 [Pipeline] } 00:30:59.762 [Pipeline] // wrap 00:30:59.769 [Pipeline] } 00:30:59.783 [Pipeline] // catchError 00:30:59.793 [Pipeline] stage 00:30:59.795 [Pipeline] { (Epilogue) 00:30:59.812 [Pipeline] catchError 00:30:59.814 [Pipeline] { 00:30:59.832 [Pipeline] echo 00:30:59.835 Cleanup processes 00:30:59.842 [Pipeline] sh 00:31:00.130 + sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:31:00.130 1532283 sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:31:00.145 [Pipeline] sh 00:31:00.431 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-phy-autotest/spdk 00:31:00.432 ++ grep -v 'sudo pgrep' 00:31:00.432 ++ awk '{print $1}' 00:31:00.432 + sudo kill -9 00:31:00.432 + true 00:31:00.445 [Pipeline] sh 00:31:00.732 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:00.732 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:31:07.303 xz: Reduced the number of threads from 112 to 89 to not exceed the memory usage limit of 14,718 MiB 00:31:10.607 [Pipeline] sh 00:31:10.895 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:10.895 Artifacts sizes are good 00:31:10.912 [Pipeline] archiveArtifacts 00:31:10.921 Archiving artifacts 00:31:11.066 [Pipeline] sh 00:31:11.353 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-phy-autotest 00:31:11.368 [Pipeline] cleanWs 00:31:11.378 [WS-CLEANUP] Deleting project workspace... 00:31:11.378 [WS-CLEANUP] Deferred wipeout is used... 00:31:11.385 [WS-CLEANUP] done 00:31:11.387 [Pipeline] } 00:31:11.404 [Pipeline] // catchError 00:31:11.419 [Pipeline] sh 00:31:11.707 + logger -p user.info -t JENKINS-CI 00:31:11.716 [Pipeline] } 00:31:11.731 [Pipeline] // stage 00:31:11.736 [Pipeline] } 00:31:11.751 [Pipeline] // node 00:31:11.757 [Pipeline] End of Pipeline 00:31:11.808 Finished: SUCCESS
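For reference, the coverage post-processing traced between 16:21:20 and 16:21:30 above reduces to the following lcov sequence. This is a condensed sketch: the genhtml/geninfo rc options carried in the trace are omitted, and the OUT path stands in for the jenkins output directory used in this run:

#!/usr/bin/env bash
# Merge the pre-test baseline with the post-test capture, then strip coverage for
# code outside the tree under test, as in the autotest coverage steps traced above.
OUT=./output
LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1"

lcov $LCOV_OPTS -q -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info
lcov $LCOV_OPTS -q -r $OUT/cov_total.info '*/dpdk/*' -o $OUT/cov_total.info
lcov $LCOV_OPTS -q -r $OUT/cov_total.info --ignore-errors unused,unused '/usr/*' -o $OUT/cov_total.info
lcov $LCOV_OPTS -q -r $OUT/cov_total.info '*/examples/vmd/*' -o $OUT/cov_total.info
lcov $LCOV_OPTS -q -r $OUT/cov_total.info '*/app/spdk_lspci/*' -o $OUT/cov_total.info
lcov $LCOV_OPTS -q -r $OUT/cov_total.info '*/app/spdk_top/*' -o $OUT/cov_total.info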